Updates from: 10/21/2022 01:12:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Export Import Provisioning Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/expression-builder.md
Previously updated : 06/02/2021 Last updated : 10/20/2022
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 04/13/2022 Last updated : 10/20/2022
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 02/03/2022 Last updated : 10/20/2022
active-directory Hr Attribute Retrieval Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-attribute-retrieval-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr Manager Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-manager-update-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr User Creation Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-creation-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr User Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-update-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr Writeback Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-writeback-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Isv Automatic Provisioning Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
Previously updated : 11/18/2021 Last updated : 10/20/2022
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
Previously updated : 11/17/2021 Last updated : 10/20/2022
active-directory On Premises Sql Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-sql-connector-configure.md
Previously updated : 06/06/2021 Last updated : 10/20/2022
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
This article uses the following terms:
| - | - |
| On-demand webinars| [Manage your Enterprise Applications with Azure AD](https://info.microsoft.com/CO-AZUREPLAT-WBNR-FY18-03Mar-06-ManageYourEnterpriseApplicationsOption1-MCW0004438_02OnDemandRegistration-ForminBody.html)<br>Learn how Azure AD can help you achieve SSO to your enterprise SaaS applications and best practices for controlling access. |
| Videos| [What is user provisioning in Azure Active Directory?](https://youtu.be/_ZjARPpI6NI) <br> [How to deploy user provisioning in Azure Active Directory?](https://youtu.be/pKzyts6kfrw) <br> [Integrating Salesforce with Azure AD: How to automate User Provisioning](https://azure.microsoft.com/resources/videos/integrating-salesforce-with-azure-ad-how-to-automate-user-provisioning/) |
-| Online courses| SkillUp Online: [Managing Identities](https://skillup.online/courses/course-v1:Microsoft+AZ-100.5+2018_T3/about) <br> Learn how to integrate Azure AD with many SaaS applications and to secure user access to those applications. |
+| Online courses| SkillUp Online: [Managing Identities](https://skillup.online/courses/course-v1:Microsoft+AZ-100.5+2018_T3/) <br> Learn how to integrate Azure AD with many SaaS applications and to secure user access to those applications. |
| Books| [Modern Authentication with Azure Active Directory for Web Applications (Developer Reference) 1st Edition](https://www.amazon.com/Authentication-Directory-Applications-Developer-Reference/dp/0735696942/ref=sr_1_fkmr0_1?keywords=Azure+multifactor+authentication&qid=1550168894&s=gateway&sr=8-1-fkmr0). <br> This is an authoritative, deep-dive guide to building Active Directory authentication solutions for these new environments. |
| Tutorials| See the [list of tutorials on how to integrate SaaS apps with Azure AD](../saas-apps/tutorial-list.md). |
| FAQ| [Frequently asked questions](../app-provisioning/user-provisioning.md) on automated user provisioning |
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 07/13/2021 Last updated : 10/20/2022
active-directory Provisioning Agent Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provisioning-agent-release-version-history.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Sap Successfactors Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-attribute-reference.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
Previously updated : 10/11/2021 Last updated : 10/20/2022
active-directory Scim Graph Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-graph-scenarios.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Previously updated : 08/24/2021 Last updated : 10/20/2022
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 12/08/2021 Last updated : 10/20/2022
active-directory What Is Hr Driven Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/what-is-hr-driven-provisioning.md
Previously updated : 10/30/2020 Last updated : 10/20/2022
active-directory Workday Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-attribute-reference.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Workday Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md
Previously updated : 06/01/2021 Last updated : 10/20/2022
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 08/07/2022 Last updated : 09/12/2022
Once any errors have been addressed, the administrator then can activate each key.
Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time. Hardware OATH tokens cannot be assigned to guest users in the resource tenant.

>[!IMPORTANT]
->The preview is not supported in Azure Government or sovereign clouds.
+>The preview is only supported in Azure Global and Azure Government clouds.
## Next steps
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
Previously updated : 03/22/2022 Last updated : 09/15/2022
The following tables list Azure AD feature availability in Azure Government.
|**Authentication, single sign-on, and MFA**|Cloud authentication (Pass-through authentication, password hash synchronization) | &#x2705; |
|| Federated authentication (Active Directory Federation Services or federation with other identity providers) | &#x2705; |
|| Single sign-on (SSO) unlimited | &#x2705; |
-|| Multifactor authentication (MFA) | Hardware OATH tokens are not available. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based off the user's current IP address. Microsoft Authenticator only shows GUID and not UPN for compliance reasons. |
+|| Multifactor authentication (MFA) <sup>1</sup>| &#x2705; |
|| Passwordless (Windows Hello for Business, Microsoft Authenticator, FIDO2 security key integrations) | &#x2705; |
|| Service-level agreement | &#x2705; |
|**Applications access**|SaaS apps with modern authentication (Azure AD application gallery apps, SAML, and OAUTH 2.0) | &#x2705; |
The following tables list Azure AD feature availability in Azure Government.
|| Identity Protection: vulnerabilities and risky accounts | &#x2705; |
|| Identity Protection: risk events investigation, SIEM connectivity | &#x2705; |
|**Frontline workers**|SMS sign-in | Feature not available. |
-|| Shared device sign-out | Enterprise state roaming for Windows 10 devices is not available. |
+|| Shared device sign-out | Enterprise state roaming for Windows 10 devices isn't available. |
|| Delegated user management portal (My Staff) | Feature not available. |
+<sup>1</sup>Microsoft Authenticator only shows GUID and not UPN for compliance reasons.
## Identity protection
active-directory Plan Cloud Sync Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/plan-cloud-sync-topologies.md
Keep the following information in mind when selecting a solution.
- Users and groups must be uniquely identified across all forests
- Matching across forests doesn't occur with cloud sync
- A user or group must be represented only once across all forests
- The source anchor for objects is chosen automatically. It uses ms-DS-ConsistencyGuid if present, otherwise ObjectGUID is used.
- You can't change the attribute that is used for source anchor.
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
Title: Build apps that sign in Azure AD users
-description: Shows how to build a multi-tenant application that can sign in a user from any Azure Active Directory tenant.
+ Title: Convert single-tenant app to multi-tenant on Azure AD
+description: Shows how to convert an existing single-tenant app to a multi-tenant app that can sign in a user from any Azure AD tenant.
- Previously updated : 10/27/2020 Last updated : 10/20/2022
+#Customer intent: As an Azure user, I want to convert a single tenant app to an Azure AD multi-tenant app so any Azure AD user can sign in.
-# Sign in any Azure Active Directory user using the multi-tenant application pattern
-
-If you offer a Software as a Service (SaaS) application to many organizations, you can configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. This configuration is called *making your application multi-tenant*. Users in any Azure AD tenant will be able to sign in to your application after consenting to use their account with your application.
+# Making your application multi-tenant
-If you have an existing application that has its own account system, or supports other kinds of sign-ins from other cloud providers, adding Azure AD sign-in from any tenant is simple. Just register your app, add sign-in code via OAuth2, OpenID Connect, or SAML, and put a ["Sign in with Microsoft" button][AAD-App-Branding] in your application.
+If you offer a Software as a Service (SaaS) application to many organizations, you can configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant by converting it to multi-tenant. Users in any Azure AD tenant will be able to sign in to your application after consenting to use their account with your application.
-> [!NOTE]
-> This article assumes you're already familiar with building a single-tenant application for Azure AD. If you're not, start with one of the quickstarts on the [developer guide homepage][AAD-Dev-Guide].
+For existing apps with their own account system (or sign-ins from other cloud providers), you should add sign-in code via OAuth2, OpenID Connect, or SAML, and put a ["Sign in with Microsoft" button][AAD-App-Branding] in your application.
-There are four steps to convert your application into an Azure AD multi-tenant app:
+In this how-to guide, you'll undertake the four steps needed to convert a single tenant app into an Azure AD multi-tenant app:
1. [Update your application registration to be multi-tenant](#update-registration-to-be-multi-tenant)
-2. [Update your code to send requests to the /common endpoint](#update-your-code-to-send-requests-to-common)
+2. [Update your code to send requests to the `/common` endpoint](#update-your-code-to-send-requests-to-common)
3. [Update your code to handle multiple issuer values](#update-your-code-to-handle-multiple-issuer-values)
-4. [Understand user and admin consent and make appropriate code changes](#understand-user-and-admin-consent)
+4. [Understand user and admin consent and make appropriate code changes](#understand-user-and-admin-consent-and-make-appropriate-code-changes)
-Let's look at each step in detail. You can also jump straight to the sample [Build a multi-tenant SaaS web application that calls Microsoft Graph using Azure AD and OpenID Connect](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md).
+You can also refer to the sample: [Build a multi-tenant SaaS web application that calls Microsoft Graph using Azure AD and OpenID Connect](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md). This how-to assumes familiarity with building a single-tenant application for Azure AD. If not, start with one of the quickstarts on the [developer guide homepage][AAD-Dev-Guide].
## Update registration to be multi-tenant
-By default, web app/API registrations in Azure AD are single-tenant. You can make your registration multi-tenant by finding the **Supported account types** switch on the **Authentication** pane of your application registration in the [Azure portal][AZURE-portal] and setting it to **Accounts in any organizational directory**.
-
-Before an application can be made multi-tenant, Azure AD requires the App ID URI of the application to be globally unique. The App ID URI is one of the ways an application is identified in protocol messages. For a single-tenant application, it is sufficient for the App ID URI to be unique within that tenant. For a multi-tenant application, it must be globally unique so Azure AD can find the application across all tenants. Global uniqueness is enforced by requiring the App ID URI to have a host name that matches a verified domain of the Azure AD tenant.
-
-By default, apps created via the Azure portal have a globally unique App ID URI set on app creation, but you can change this value. For example, if the name of your tenant was contoso.onmicrosoft.com then a valid App ID URI would be `https://contoso.onmicrosoft.com/myapp`. If your tenant had a verified domain of `contoso.com`, then a valid App ID URI would also be `https://contoso.com/myapp`. If the App ID URI doesn't follow this pattern, setting an application as multi-tenant fails.
+By default, web app/API registrations in Azure AD are single-tenant upon creation. To make the registration multi-tenant, look for the **Supported account types** section on the **Authentication** pane of the application registration in the [Azure portal][AZURE-portal]. Change the setting to **Accounts in any organizational directory**.
-## Update your code to send requests to /common
+When a single-tenant application is created via the Azure portal, one of the items listed on the **Overview** page is the **Application ID URI**. This is one of the ways an application is identified in protocol messages, and it can be added at any time. The App ID URI for a single-tenant app only needs to be unique within that tenant. In contrast, for multi-tenant apps it must be globally unique across all tenants, so that Azure AD can find the app in any tenant.
-In a single-tenant application, sign-in requests are sent to the tenant's sign-in endpoint. For example, for contoso.onmicrosoft.com the endpoint would be: `https://login.microsoftonline.com/contoso.onmicrosoft.com`. Requests sent to a tenant's endpoint can sign in users (or guests) in that tenant to applications in that tenant.
+For example, if the name of your tenant was `contoso.onmicrosoft.com` then a valid App ID URI would be `https://contoso.onmicrosoft.com/myapp`. If the App ID URI doesn't follow this pattern, setting an application as multi-tenant fails.
-With a multi-tenant application, the application doesn't know up front what tenant the user is from, so you can't send requests to a tenant's endpoint. Instead, requests are sent to an endpoint that multiplexes across all Azure AD tenants: `https://login.microsoftonline.com/common`
+## Update your code to send requests to `/common`
-When the Microsoft identity platform receives a request on the /common endpoint, it signs the user in and, as a consequence, discovers which tenant the user is from. The /common endpoint works with all of the authentication protocols supported by the Azure AD: OpenID Connect, OAuth 2.0, SAML 2.0, and WS-Federation.
+With a multi-tenant application, because the application can't immediately tell which tenant the user is from, requests can't be sent to a tenant's endpoint. Instead, requests are sent to an endpoint that multiplexes across all Azure AD tenants: `https://login.microsoftonline.com/common`.
-The sign-in response to the application then contains a token representing the user. The issuer value in the token tells an application what tenant the user is from. When a response returns from the /common endpoint, the issuer value in the token corresponds to the user's tenant.
+Edit your code and change the value for your tenant to `/common`. It's important to note that this endpoint isn't a tenant or an issuer itself. When the Microsoft identity platform receives a request on the `/common` endpoint, it signs the user in, thereby discovering which tenant the user is from. This endpoint works with all of the authentication protocols supported by Azure AD (OpenID Connect, OAuth 2.0, SAML 2.0, WS-Federation).
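As a minimal sketch of that change, assuming MSAL Node (the client ID below is a placeholder, and your app may keep this configuration elsewhere):

```javascript
// A sketch only: swap a tenant-pinned authority for the /common endpoint.
const msal = require('@azure/msal-node');

const pca = new msal.PublicClientApplication({
  auth: {
    clientId: '00000000-0000-0000-0000-000000000000', // placeholder client ID
    // Single-tenant apps pin the authority to one tenant, for example:
    //   'https://login.microsoftonline.com/contoso.onmicrosoft.com'
    // Multi-tenant apps send requests to the multiplexing endpoint instead:
    authority: 'https://login.microsoftonline.com/common',
  },
});
```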
-> [!IMPORTANT]
-> The /common endpoint is not a tenant and is not an issuer, it's just a multiplexer. When using /common, the logic in your application to validate tokens needs to be updated to take this into account.
+The sign-in response to the application then contains a token representing the user. The issuer value in the token tells an application what tenant the user is from. When a response returns from the `/common` endpoint, the issuer value in the token corresponds to the user's tenant.
## Update your code to handle multiple issuer values
-Web applications and web APIs receive and validate tokens from the Microsoft identity platform.
+Web applications and web APIs receive and validate tokens from the Microsoft identity platform. Native client applications don't validate access tokens and must treat them as opaque; they request and receive tokens from the Microsoft identity platform only to send them to APIs, where the tokens are then validated. Multi-tenant applications can't validate tokens simply by matching the issuer value in the metadata with the `issuer` value in the token. A multi-tenant application needs logic to decide which issuer values are valid and which aren't, based on the tenant ID portion of the issuer value.
-> [!NOTE]
-> While native client applications request and receive tokens from the Microsoft identity platform, they do so to send them to APIs, where they are validated. Native applications do not validate access tokens and must treat them as opaque.
+For example, if a multi-tenant application only allows sign-in from specific tenants who have signed up for their service, then it must check either the `issuer` value or the `tid` claim value in the token to make sure that tenant is in their list of subscribers. If a multi-tenant application only deals with individuals and doesn't make any access decisions based on tenants, then it can ignore the issuer value altogether.
-Let's look at how an application validates tokens it receives from the Microsoft identity platform. A single-tenant application normally takes an endpoint value like:
+In the [multi-tenant samples][AAD-Samples-MT], issuer validation is disabled to enable any Azure AD tenant to sign in. Because the `/common` endpoint doesn't correspond to a tenant and isn't an issuer, when you examine the issuer value in the metadata for `/common`, it has a templated URL instead of an actual value:
```http
-https://login.microsoftonline.com/contoso.onmicrosoft.com
+https://sts.windows.net/{tenantid}/
```
+To ensure your app can support multiple tenants, modify the relevant section of your code so that issuer validation accepts this templated `{tenantid}` value.
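As a hedged sketch of that validation logic (not the samples' actual code; the allow-list contents are hypothetical), an app that only serves subscribed tenants might check the token like this:

```javascript
// Sketch only: `claims` is the decoded payload of a token whose signature
// has already been verified against the signing keys from the metadata.
const allowedTenants = new Set([
  '31537af4-6d77-4bb9-a681-d2394888ea26', // example subscriber tenant ID
]);

function isAllowedIssuer(claims) {
  // Substitute the token's own tenant ID into the templated issuer value...
  const expectedIssuer = `https://sts.windows.net/${claims.tid}/`;
  // ...then require both an issuer match and membership in the allow-list.
  return claims.iss === expectedIssuer && allowedTenants.has(claims.tid);
}
```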
-...and uses it to construct a metadata URL (in this case, OpenID Connect) like:
+In contrast, single-tenant applications normally take endpoint values to construct metadata URLs such as:
```http
https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration
```

Each Azure AD tenant has a unique issuer value of the form:

```http
https://sts.windows.net/31537af4-6d77-4bb9-a681-d2394888ea26/
```
-...where the GUID value is the rename-safe version of the tenant ID of the tenant. If you select the preceding metadata link for `contoso.onmicrosoft.com`, you can see this issuer value in the document.
+...where the GUID value is the rename-safe version of the tenant ID of the tenant.
When a single-tenant application validates a token, it checks the signature of the token against the signing keys from the metadata document. This test allows it to make sure the issuer value in the token matches the one that was found in the metadata document.
-Because the /common endpoint doesn't correspond to a tenant and isn't an issuer, when you examine the issuer value in the metadata for /common it has a templated URL instead of an actual value:
-
-```http
-https://sts.windows.net/{tenantid}/
-```
-
-Therefore, a multi-tenant application can't validate tokens just by matching the issuer value in the metadata with the `issuer` value in the token. A multi-tenant application needs logic to decide which issuer values are valid and which are not based on the tenant ID portion of the issuer value.
-
-For example, if a multi-tenant application only allows sign-in from specific tenants who have signed up for their service, then it must check either the issuer value or the `tid` claim value in the token to make sure that tenant is in their list of subscribers. If a multi-tenant application only deals with individuals and doesn't make any access decisions based on tenants, then it can ignore the issuer value altogether.
-
-In the [multi-tenant samples][AAD-Samples-MT], issuer validation is disabled to enable any Azure AD tenant to sign in.
-
-## Understand user and admin consent
+## Understand user and admin consent and make appropriate code changes
-For a user to sign in to an application in Azure AD, the application must be represented in the user's tenant. This allows the organization to do things like apply unique policies when users from their tenant sign in to the application. For a single-tenant application, this registration easier; it's the one that happens when you register the application in the [Azure portal][AZURE-portal].
+For a user to sign in to an application in Azure AD, the application must be represented in the user's tenant. This allows the organization to do things like apply unique policies when users from their tenant sign in to the application. For a single-tenant application, this is the registration you create in the [Azure portal][AZURE-portal].
-For a multi-tenant application, the initial registration for the application lives in the Azure AD tenant used by the developer. When a user from a different tenant signs in to the application for the first time, Azure AD asks them to consent to the permissions requested by the application. If they consent, then a representation of the application called a *service principal* is created in the user's tenant, and sign-in can continue. A delegation is also created in the directory that records the user's consent to the application. For details on the application's Application and ServicePrincipal objects, and how they relate to each other, see [Application objects and service principal objects][AAD-App-SP-Objects].
+For a multi-tenant application, the initial registration for the application resides in the Azure AD tenant used by the developer. When a user from a different tenant signs in to the application for the first time, Azure AD asks them to consent to the permissions requested by the application. If they consent, then a representation of the application called a *service principal* is created in the user's tenant, and sign-in can continue. A delegation is also created in the directory that records the user's consent to the application. For details on the application's Application and ServicePrincipal objects, and how they relate to each other, see [Application objects and service principal objects][AAD-App-SP-Objects].
-![Illustrates consent to single-tier app][Consent-Single-Tier]
+![Diagram which illustrates a user's consent to a single-tier app.][Consent-Single-Tier]
This consent experience is affected by the permissions requested by the application. The Microsoft identity platform supports two kinds of permissions, app-only and delegated.
To learn more about user and admin consent, see [Configure the admin consent wor
App-only permissions always require a tenant administrator's consent. If your application requests an app-only permission and a user tries to sign in to the application, an error message is displayed saying the user isn't able to consent.
-Certain delegated permissions also require a tenant administrator's consent. For example, the ability to write back to Azure AD as the signed in user requires a tenant administrator's consent. Like app-only permissions, if an ordinary user tries to sign in to an application that requests a delegated permission that requires administrator consent, your application receives an error. Whether a permission requires admin consent is determined by the developer that published the resource, and can be found in the documentation for the resource. The permissions documentation for the [Microsoft Graph API][MSFT-Graph-permission-scopes] indicate which permissions require admin consent.
+Certain delegated permissions also require a tenant administrator's consent. For example, the ability to write back to Azure AD as the signed in user requires a tenant administrator's consent. Like app-only permissions, if an ordinary user tries to sign in to an application that requests a delegated permission that requires administrator consent, the app receives an error. Whether a permission requires admin consent is determined by the developer that published the resource, and can be found in the documentation for the resource. The permissions documentation for the [Microsoft Graph API][MSFT-Graph-permission-scopes] indicates which permissions require admin consent.
-If your application uses permissions that require admin consent, have a gesture such as a button or link where the admin can initiate the action. The request your application sends for this action is the usual OAuth2/OpenID Connect authorization request that also includes the `prompt=consent` query string parameter. Once the admin has consented and the service principal is created in the customer's tenant, subsequent sign-in requests do not need the `prompt=consent` parameter. Since the administrator has decided the requested permissions are acceptable, no other users in the tenant are prompted for consent from that point forward.
+If your application uses permissions that require admin consent, consider adding a button or link where the admin can initiate the action. The request your application sends for this action is the usual OAuth2/OpenID Connect authorization request that also includes the `prompt=consent` query string parameter. Once the admin has consented and the service principal is created in the customer's tenant, subsequent sign-in requests don't need the `prompt=consent` parameter. Since the administrator has decided the requested permissions are acceptable, no other users in the tenant are prompted for consent from that point forward.
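For illustration, a sketch of such a request using MSAL Node's `getAuthCodeUrl` (assuming `clientApp` is an initialized confidential client application; the scopes and redirect URI are placeholders):

```javascript
// Sketch: build an authorization URL for an admin-initiated consent gesture.
const authCodeUrl = await clientApp.getAuthCodeUrl({
  scopes: ['User.Read'], // placeholder scopes
  redirectUri: 'http://localhost/redirect', // placeholder redirect URI
  prompt: 'consent', // forces the consent prompt for this sign-in
});
// Redirect the admin's browser to authCodeUrl when they select your button or link.
```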
A tenant administrator can disable the ability for regular users to consent to applications. If this capability is disabled, admin consent is always required for the application to be used in the tenant. If you want to test your application with end-user consent disabled, you can find the configuration switch in the [Azure portal][AZURE-portal] in the **[User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/UserSettings/menuId/)** section under **Enterprise applications**.
-The `prompt=consent` parameter can also be used by applications that request permissions that do not require admin consent. An example of when this would be used is if the application requires an experience where the tenant admin "signs up" one time, and no other users are prompted for consent from that point on.
+The `prompt=consent` parameter can also be used by applications that request permissions that don't require admin consent. An example of when this would be used is if the application requires an experience where the tenant admin "signs up" one time, and no other users are prompted for consent from that point on.
If an application requires admin consent and an admin signs in without the `prompt=consent` parameter being sent, when the admin successfully consents to the application it will apply **only for their user account**. Regular users will still not be able to sign in or consent to the application. This feature is useful if you want to give the tenant administrator the ability to explore your application before allowing other users access.
This can be a problem if your logical application consists of two or more applications.
This is demonstrated in a multi-tier native client calling web API sample in the [Related content](#related-content) section at the end of this article. The following diagram provides an overview of consent for a multi-tier app registered in a single tenant.
-![Illustrates consent to multi-tier known client app][Consent-Multi-Tier-Known-Client]
+![Diagram which illustrates consent to multi-tier known client app.][Consent-Multi-Tier-Known-Client]
#### Multiple tiers in multiple tenants
If it's an API built by an organization other than Microsoft, the developer of t
The following diagram provides an overview of consent for a multi-tier app registered in different tenants.
-![Illustrates consent to multi-tier multi-party app][Consent-Multi-Tier-Multi-Party]
+![Diagram which illustrates consent to multi-tier multi-party app.][Consent-Multi-Tier-Multi-Party]
### Revoking consent
Users and administrators can revoke consent to your application at any time:
* Users revoke access to individual applications by removing them from their [Access Panel Applications][AAD-Access-Panel] list.
* Administrators revoke access to applications by removing them using the [Enterprise applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps) section of the [Azure portal][AZURE-portal].
-If an administrator consents to an application for all users in a tenant, users cannot revoke access individually. Only the administrator can revoke access, and only for the whole application.
+If an administrator consents to an application for all users in a tenant, users can't revoke access individually. Only the administrator can revoke access, and only for the whole application.
## Multi-tenant applications and caching access tokens
-Multi-tenant applications can also get access tokens to call APIs that are protected by Azure AD. A common error when using the Microsoft Authentication Library (MSAL) with a multi-tenant application is to initially request a token for a user using /common, receive a response, then request a subsequent token for that same user also using /common. Because the response from Azure AD comes from a tenant, not /common, MSAL caches the token as being from the tenant. The subsequent call to /common to get an access token for the user misses the cache entry, and the user is prompted to sign in again. To avoid missing the cache, make sure subsequent calls for an already signed in user are made to the tenant's endpoint.
+Multi-tenant applications can also get access tokens to call APIs that are protected by Azure AD. A common error when using the Microsoft Authentication Library (MSAL) with a multi-tenant application is to initially request a token for a user using `/common`, receive a response, then request a subsequent token for that same user also using `/common`. Because the response from Azure AD comes from a tenant, not `/common`, MSAL caches the token as being from the tenant. The subsequent call to `/common` to get an access token for the user misses the cache entry, and the user is prompted to sign in again. To avoid missing the cache, make sure subsequent calls for an already signed in user are made to the tenant's endpoint.
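A sketch of that pattern with MSAL Node (assuming `pca` is an initialized `PublicClientApplication` and `authResponse` came from the initial interactive request against `/common`):

```javascript
// Sketch: make subsequent silent calls against the user's home tenant,
// not /common, so the lookup hits the cached tokens.
const account = authResponse.account;

const silentResult = await pca.acquireTokenSilent({
  account,
  scopes: ['User.Read'], // placeholder scopes
  // account.tenantId is the home tenant discovered at first sign-in.
  authority: `https://login.microsoftonline.com/${account.tenantId}`,
});
```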
## Related content
Multi-tenant applications can also get access tokens to call APIs that are protected by Azure AD.
## Next steps
-In this article, you learned how to build an application that can sign in a user from any Azure AD tenant. After enabling Single Sign-On (SSO) between your app and Azure AD, you can also update your application to access APIs exposed by Microsoft resources like Microsoft 365. This lets you offer a personalized experience in your application, such as showing contextual information to the users, like their profile picture or their next calendar appointment.
+In this article, you learned how to convert a single tenant application to a multi-tenant application. After enabling single sign-on (SSO) between your app and Azure AD, update your app to access APIs exposed by Microsoft resources like Microsoft 365. This lets you offer a personalized experience in your application, such as showing contextual information to the users, for example, profile pictures and calendar appointments.
To learn more about making API calls to Azure AD and Microsoft 365 services like Exchange, SharePoint, OneDrive, OneNote, and more, visit [Microsoft Graph API][MSFT-Graph-overview].
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
>
> ### Requesting tokens
>
-> In the first leg of authorization code flow with PKCE, prepare and send an authorization code request with the appropriate parameters. Then, in the second leg of the flow, listen for the authorization code response. Once the code is obtained, exchange it to obtain a token.
+> You can use MSAL Node's acquireTokenInteractive public API to acquire tokens via an external user-agent such as the default system browser.
>
> ```javascript
-> // The redirect URI you setup during app registration with a custom file protocol "msal"
-> const redirectUri = "msal://redirect";
->
-> const cryptoProvider = new CryptoProvider();
->
-> const pkceCodes = {
-> challengeMethod: "S256", // Use SHA256 Algorithm
-> verifier: "", // Generate a code verifier for the Auth Code Request first
-> challenge: "" // Generate a code challenge from the previously generated code verifier
-> };
->
-> /**
-> * Starts an interactive token request
-> * @param {object} authWindow: Electron window object
-> * @param {object} tokenRequest: token request object with scopes
-> */
-> async function getTokenInteractive(authWindow, tokenRequest) {
->
-> /**
-> * Proof Key for Code Exchange (PKCE) Setup
-> *
-> * MSAL enables PKCE in the Authorization Code Grant Flow by including the codeChallenge and codeChallengeMethod
-> * parameters in the request passed into getAuthCodeUrl() API, as well as the codeVerifier parameter in the
-> * second leg (acquireTokenByCode() API).
-> */
->
-> const {verifier, challenge} = await cryptoProvider.generatePkceCodes();
->
-> pkceCodes.verifier = verifier;
-> pkceCodes.challenge = challenge;
->
-> const authCodeUrlParams = {
-> redirectUri: redirectUri
-> scopes: tokenRequest.scopes,
-> codeChallenge: pkceCodes.challenge, // PKCE Code Challenge
-> codeChallengeMethod: pkceCodes.challengeMethod // PKCE Code Challenge Method
-> };
->
-> const authCodeUrl = await pca.getAuthCodeUrl(authCodeUrlParams);
->
-> // register the custom file protocol in redirect URI
-> protocol.registerFileProtocol(redirectUri.split(":")[0], (req, callback) => {
-> const requestUrl = url.parse(req.url, true);
-> callback(path.normalize(`${__dirname}/${requestUrl.path}`));
-> });
->
-> const authCode = await listenForAuthCode(authCodeUrl, authWindow); // see below
->
-> const authResponse = await pca.acquireTokenByCode({
-> redirectUri: redirectUri,
-> scopes: tokenRequest.scopes,
-> code: authCode,
-> codeVerifier: pkceCodes.verifier // PKCE Code Verifier
-> });
->
-> return authResponse;
-> }
->
-> /**
-> * Listens for auth code response from Azure AD
-> * @param {string} navigateUrl: URL where auth code response is parsed
-> * @param {object} authWindow: Electron window object
-> */
-> async function listenForAuthCode(navigateUrl, authWindow) {
->
-> authWindow.loadURL(navigateUrl);
->
-> return new Promise((resolve, reject) => {
-> authWindow.webContents.on('will-redirect', (event, responseUrl) => {
-> try {
-> const parsedUrl = new URL(responseUrl);
-> const authCode = parsedUrl.searchParams.get('code');
-> resolve(authCode);
-> } catch (err) {
-> reject(err);
-> }
-> });
-> });
+> const { shell } = require('electron');
+>
+> try {
+> const openBrowser = async (url) => {
+> await shell.openExternal(url);
+> };
+>
+> const authResponse = await pca.acquireTokenInteractive({
+> scopes: ["User.Read"],
+> openBrowser,
+> successTemplate: '<h1>Successfully signed in!</h1> <p>You can close this window now.</p>',
+> failureTemplate: '<h1>Oops! Something went wrong</h1> <p>Check the console for more information.</p>',
+> });
+>
+> return authResponse;
+> } catch (error) {
+> throw error;
> }
> ```
->
-> > |Where:| Description |
-> > |||
-> > | `authWindow` | Current Electron window in process. |
-> > | `tokenRequest` | Contains the scopes being requested, such as `"User.Read"` for Microsoft Graph or `"api://<Application ID>/access_as_user"` for custom web APIs. |
+>
>
> ## Next steps
>
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
First, complete the steps in [Register an application with the Microsoft identit
Use the following settings for your app registration:

- Name: `ElectronDesktopApp` (suggested)
-- Supported account types: **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**
+- Supported account types: **Accounts in my organizational directory only (single tenant)**
- Platform type: **Mobile and desktop applications**
-- Redirect URI: `msal{Your_Application/Client_Id}://auth`
+- Redirect URI: `http://localhost`
## Create the project
Create a folder to host your application, for example *ElectronDesktopApp*.
```console
npm init -y
- npm install --save @azure/msal-node axios bootstrap dotenv jquery popper.js
- npm install --save-dev babel electron@18.2.3 webpack
+ npm install --save @azure/msal-node @microsoft/microsoft-graph-client isomorphic-fetch bootstrap jquery popper.js
+ npm install --save-dev electron@20.0.0
```

2. Then, create a folder named *App*. Inside this folder, create a file named *index.html* that will serve as UI. Add the following code there:
The renderer methods are exposed by the preload script found in the *preload.js*
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/preload.js":::
-This preload script exposes a renderer methods to give the renderer process controlled access to some `Node APIs` by applying IPC channels that have been configured for communication between the main and renderer processes.
+This preload script exposes a renderer API to give the renderer process controlled access to some `Node APIs` by applying IPC channels that have been configured for communication between the main and renderer processes.
-6. Next, create *UIManager.js* class inside the *App* folder and add the following code:
-
- :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/UIManager.js":::
-
-7. After that, create *CustomProtocolListener.js* class and add the following code there:
-
- :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/CustomProtocolListener.js":::
-
-*CustomProtocolListener* class can be instantiated in order to register and unregister a custom typed protocol on which MSAL Node can listen for Auth Code responses.
-
-8. Finally, create a file named *constants.js* that will store the strings constants for describing the application **events**:
+6. Finally, create a file named *constants.js* that will store the strings constants for describing the application **events**:
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/constants.js":::
ElectronDesktopApp/
├── App
│   ├── AuthProvider.js
│   ├── constants.js
-│   ├── CustomProtocolListener.js
-│   ├── fetch.js
+│   ├── graph.js
│   ├── index.html
│   ├── main.js
│   ├── preload.js
│   ├── renderer.js
-│   ├── UIManager.js
│   ├── authConfig.js
├── package.json
```
In *App* folder, create a file named *AuthProvider.js*. The *AuthProvider.js* fi
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/AuthProvider.js":::
-In the code snippet above, we first initialized MSAL Node `PublicClientApplication` by passing a configuration object (`msalConfig`). We then exposed `login`, `logout` and `getToken` methods to be called by main module (*main.js*). In `login` and `getToken`, we acquire ID and access tokens, respectively, by first requesting an authorization code and then exchanging this with a token using MSAL Node `acquireTokenByCode` public API.
+In the code snippet above, we first initialized MSAL Node `PublicClientApplication` by passing a configuration object (`msalConfig`). We then exposed `login`, `logout` and `getToken` methods to be called by main module (*main.js*). In `login` and `getToken`, we acquire ID and access tokens using MSAL Node `acquireTokenInteractive` public API.
-## Add a method to call a web API
+## Add Microsoft Graph SDK
-Create another file named *fetch.js*. This file will contain an Axios HTTP client for making REST calls to the Microsoft Graph API.
+Create a file named *graph.js*. The *graph.js* file will contain an instance of the Microsoft Graph SDK Client to facilitate accessing data on the Microsoft Graph API, using the access token obtained by MSAL Node:
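The tutorial pulls this file from the sample repo; as a rough sketch of the idea, assuming the `@microsoft/microsoft-graph-client` package and a helper name of our own choosing, it might resemble:

```javascript
// Sketch only: wrap the MSAL-acquired access token in a Graph client.
const { Client } = require('@microsoft/microsoft-graph-client');
require('isomorphic-fetch'); // the Graph client needs a fetch implementation

function getGraphClient(accessToken) {
  return Client.init({
    // Called before each request to supply the bearer token.
    authProvider: (done) => done(null, accessToken),
  });
}

module.exports = getGraphClient;
```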
## Add app registration details
-Finally, create an environment file to store the app registration details that will be used when acquiring tokens. To do so, create a file named *authConfig.js* inside the root folder of the sample (*ElectronDesktopApp*), and add the following code:
+Create an environment file to store the app registration details that will be used when acquiring tokens. To do so, create a file named *authConfig.js* inside the root folder of the sample (*ElectronDesktopApp*), and add the following code:
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/authConfig.js":::
Fill in these details with the values you obtain from Azure app registration portal:
- `Enter_the_Cloud_Instance_Id_Here`: The Azure cloud instance in which your application is registered.
  - For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com/`.
  - For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).
-- `Enter_the_Redirect_Uri_Here`: The Redirect Uri of the application you registered `msal{Your_Application/Client_Id}:///auth`.
- `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API the application should communicate with.
  - For the **global** Microsoft Graph API endpoint, replace both instances of this string with `https://graph.microsoft.com/`.
  - For endpoints in **national** cloud deployments, see [National cloud deployments](/graph/deployments) in the Microsoft Graph documentation.
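As an illustration only (every value below is an example, not your registration's real details, and the variable names are our own), a filled-in *authConfig.js* might look like:

```javascript
// Sketch: example values only; substitute your own registration details.
const AAD_ENDPOINT_HOST = 'https://login.microsoftonline.com/'; // cloud instance
const GRAPH_ENDPOINT_HOST = 'https://graph.microsoft.com/'; // Graph endpoint

const msalConfig = {
  auth: {
    clientId: '00000000-0000-0000-0000-000000000000', // application (client) ID
    authority: `${AAD_ENDPOINT_HOST}YOUR_TENANT_ID`, // single-tenant authority
  },
};

module.exports = { msalConfig, GRAPH_ENDPOINT_HOST };
```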
If you consent to the requested permissions, the web application displays your
## Test web API call
-After you sign in, select **See Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API:
+After you sign in, select **See Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API. After consent, you'll view the profile information returned in the response:
:::image type="content" source="media/tutorial-v2-nodejs-desktop/desktop-04-profile.png" alt-text="profile information from Microsoft Graph":::
-Select **Read Mails** to view the messages in user's account. You'll be presented with a consent screen:
--
-After consent, you'll view the messages returned in the response from the call to the Microsoft Graph API:
-
## How the application works
-When a user selects the **Sign In** button for the first time, get `getTokenInteractive` method of *AuthProvider.js* is called. This method redirects the user to sign-in with the Microsoft identity platform endpoint and validates the user's credentials, and then obtains an **authorization code**. This code is then exchanged for an access token using `acquireTokenByCode` public API of MSAL Node.
-
-At this point, a PKCE-protected authorization code is sent to the CORS-protected token endpoint and is exchanged for tokens. An ID token, access token, and refresh token are received by your application and processed by MSAL Node, and the information contained in the tokens is cached.
+When a user selects the **Sign In** button for the first time, the `acquireTokenInteractive` method of MSAL Node is called. This method redirects the user to sign-in with the Microsoft identity platform endpoint and validates the user's credentials, obtains an **authorization code** and then exchanges that code for an ID token, access token, and refresh token. MSAL Node also caches these tokens for future use.
The ID token contains basic information about the user, like their display name. The access token has a limited lifetime and expires after 24 hours. If you plan to use these tokens for accessing a protected resource, your back-end server *must* validate them to guarantee the token was issued to a valid user for your application.
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
When Azure AD organizations in separate Microsoft Azure clouds need to collaborate, they can use Microsoft cloud settings to enable Azure AD B2B collaboration. B2B collaboration is available between the following global and sovereign Microsoft Azure clouds:

-- Microsoft Azure global cloud and Microsoft Azure Government
-- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+- Microsoft Azure commercial cloud and Microsoft Azure Government
+- Microsoft Azure commercial cloud and Microsoft Azure China 21Vianet
To set up B2B collaboration between partner organizations in different Microsoft Azure clouds, each partner mutually agrees to configure B2B collaboration with each other. In each organization, an admin completes the following steps:
Follow these steps to add the tenant you want to collaborate with to your Organi
![Screenshot showing an organization added with default settings.](media/cross-cloud-settings/org-specific-settings-inherited.png)

1. If you want to change the cross-tenant access settings for this organization, select the **Inherited from default** link under the **Inbound access** or **Outbound access** column. Then follow the detailed steps in these sections:

   - [Modify inbound access settings](cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings)
   - [Modify outbound access settings](cross-tenant-access-settings-b2b-collaboration.md#modify-outbound-access-settings)
+## Sign-in endpoints
+
+After enabling collaboration with an organization from a different Microsoft cloud, cross-cloud Azure AD guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their Azure AD credentials.
+
+Cross-cloud Azure AD guest users can also use application endpoints that include your tenant information, for example:
+
+ * `https://myapps.microsoft.com/?tenantid=<your tenant ID>`
+ * `https://myapps.microsoft.com/<your verified domain>.onmicrosoft.com`
+ * `https://contoso.sharepoint.com/sites/testsite`
+
+You can also give cross-cloud Azure AD guest users a direct link to an application or resource by including your tenant information, for example `https://myapps.microsoft.com/signin/Twitter/<application ID>?tenantId=<your tenant ID>`.
+
+## Supported scenarios with cross-cloud Azure AD guest users
+
+The following scenarios are supported when collaborating with an organization from a different Microsoft cloud:
+
+- Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
+- Use B2B collaboration to [share Power BI content to a user in the partner tenant](/power-bi/enterprise/service-admin-azure-ad-b2b#cross-cloud-b2b).
+- Apply Conditional Access policies to the B2B collaboration user and opt to trust multi-factor authentication or device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
+
## Next steps

See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
You can configure organization-specific settings by adding an organization and m
Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds:

-- Microsoft Azure global cloud and Microsoft Azure Government
-- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+- Microsoft Azure commercial cloud and Microsoft Azure Government
+- Microsoft Azure commercial cloud and Microsoft Azure China (operated by 21Vianet)
> [!NOTE]
> Microsoft Azure Government includes the Office GCC-High and DoD clouds.
To set up B2B collaboration, both organizations configure their Microsoft cloud
- Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
- Use B2B collaboration to [share Power BI content to a user in the partner tenant](/power-bi/enterprise/service-admin-azure-ad-b2b#cross-cloud-b2b).
-- Apply Conditional Access policies to the B2B collaboration user and opt to trust device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
+- Apply Conditional Access policies to the B2B collaboration user and opt to trust multi-factor authentication or device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
> [!NOTE]
> B2B direct connect is not supported for collaboration with Azure AD tenants in a different Microsoft cloud.
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Title: Leave an organization as a guest user - Azure Active Directory
+ Title: Leave an organization - Azure Active Directory
+ description: Shows how an Azure AD B2B guest user can leave an organization by using the Access Panel.
adobe-target: true
# Leave an organization as an external user
-As an Azure Active Directory (Azure AD) B2B collaboration or B2B direct connect user, you can decide to leave an organization at any time if you no longer need to use apps from that organization or maintain any association.
+As an Azure Active Directory (Azure AD) [B2B collaboration](/articles/active-directory/external-identities/what-is-b2b.md) or [B2B direct connect](/articles/active-directory/external-identities/b2b-direct-connect-overview.md) user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
-You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization.
+You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization. This article is intended for administrators. If you're a user looking for information about how to manage and leave an organization, see the [Manage organizations article](https://support.microsoft.com/account-billing/manage-organizations-for-a-work-or-school-account-in-the-my-account-portal-a9b65a70-fec5-4a1a-8e00-09f99ebdea17).
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-dsr-and-stp-note.md)] ## What organizations do I belong to?
-1. To view the organizations you belong to, first open your **My Account** page by doing one of the following:
+1. To view the organizations you belong to, first open your **My Account** page. You either have a work or school account created by an organization or a personal account such as for Xbox, Hotmail, or Outlook.com.
   - If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
   - If you're using a personal account or email one-time passcode, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com or https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
You can usually leave an organization on your own without having to contact an a
![Screenshot showing the list of organizations you belong to.](media/leave-the-organization/organization-list.png)
- - **Home organization**: Your home organization is listed first. This is the organization that owns your work or school account. Because your account is managed by your administrator, you're not allowed to leave your home organization (you'll see there's no option to **Leave**). If you don't have an assigned home organization, you'll just see a single heading that says **Organizations** with the list of your associated organizations.
+ - **Home organization**: Your home organization is listed first. This organization owns your work or school account. Because your account is managed by your administrator, you're not allowed to leave your home organization. You'll see there's no link to **Leave**. If you don't have an assigned home organization, you'll just see a single heading that says **Organizations** with the list of your associated organizations.
- **Other organizations you collaborate with**: You'll also see the other organizations that you've signed in to previously using your work or school account. You can decide to leave any of these organizations at any time.
If your organization allows users to remove themselves from external organizatio
![Screenshot showing Leave organization option in the user interface.](media/leave-the-organization/leave-org.png)

1. When asked to confirm, select **Leave**.
-1. If you select **Leave** for an organization but you see the following message, it means youΓÇÖll need to contact the organization's admin or privacy contact and ask them to remove you from their organization.
+1. If you select **Leave** for an organization but you see the following message, it means you'll need to contact the organization's admin or privacy contact and ask them to remove you from their organization.
![Screenshot showing the message when you need permission to leave an organization.](media/leave-the-organization/need-permission-leave.png)

## Why can't I leave an organization?
-In the **Home organization** section, there's no option to **Leave** your organization. Only an administrator can remove your account from your home organization.
+In the **Home organization** section, there's no link to **Leave** your organization. Only an administrator can remove your account from your home organization.
For the external organizations listed under **Other organizations you collaborate with**, you might not be able to leave on your own, for example when:
In these cases, you can select **Leave**, but then you'll see a message saying y
## More information for administrators
-Administrators can use the **External user leave settings** to control whether external users can remove themselves from their organization. If you disallow the ability for external users to remove themselves from your organization, external users will need to contact your admin or privacy contact to be removed.
+Administrators can use the **External user leave settings** to control whether external users can remove themselves from their organization. If you disallow the ability for external users to remove themselves from your organization, external users will need to contact your admin or privacy contact to be removed.
> [!IMPORTANT]
> You can configure **External user leave settings** only if you have [added your privacy information](../fundamentals/active-directory-properties-area.md) to your Azure AD tenant. Otherwise, this setting will be unavailable. We recommend adding your privacy information to allow external users to review your policies and email your privacy contact when necessary.
Administrators can use the **External user leave settings** to control whether e
1. Under **External user leave settings**, choose whether to allow external users to leave your organization themselves:

   - **Yes**: Users can leave the organization themselves without approval from your admin or privacy contact.
- - **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin or privacy contact to request removal from your organization.
+ - **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin or privacy contact to request removal from your organization.
- ![Screenshot showing External user leave settings in the portal.](media/leave-the-organization/external-user-leave-settings.png)
+
+ :::image type="content" source="media/leave-the-organization/external-user-leave-settings.png" alt-text="Screenshot showing External user leave settings in the portal.":::
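If you prefer to manage this setting programmatically, it's also exposed through Microsoft Graph. The following is a minimal PowerShell sketch, assuming the beta `externalIdentitiesPolicy` endpoint and its `allowExternalIdentitiesToLeave` property (both are beta features and subject to change):

```powershell
# Minimal sketch, assuming the beta externalIdentitiesPolicy endpoint and its
# allowExternalIdentitiesToLeave property (beta-only; subject to change).
Connect-MgGraph -Scopes "Policy.ReadWrite.ExternalIdentities"

# Read the current External user leave setting
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/policies/externalIdentitiesPolicy"

# Disallow self-service leave (equivalent to selecting "No" in the portal)
$body = @{ allowExternalIdentitiesToLeave = $false } | ConvertTo-Json
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/externalIdentitiesPolicy" `
    -Body $body -ContentType "application/json"
```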
### Account removal
If desired, a tenant administrator can permanently delete the account at any tim
1. Select the check box next to a deleted user, and then select **Delete permanently**.
-Once permanent deletion begins, whether it's initiated by the admin or the end of the soft deletion period, it can take up to an additional 30 days for data removal ([learn more](/compliance/regulatory/gdpr-dsr-azure#step-5-delete)).
+Permanent deletion can be initiated by the admin, or it happens automatically at the end of the soft deletion period. After permanent deletion begins, data removal can take up to an extra 30 days ([learn more](/compliance/regulatory/gdpr-dsr-azure#step-5-delete)).
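For admins who script this cleanup, a minimal sketch using the Microsoft Graph `/directory/deletedItems` endpoints is shown below; the object ID is a placeholder you supply:

```powershell
# Minimal sketch: list soft-deleted users, then permanently delete one.
# The object ID below is a placeholder; permanent deletion is irreversible.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Users remain in the soft-deleted state for 30 days
$deleted = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"
$deleted.value | Select-Object id, displayName

# Permanently delete a specific soft-deleted user
$objectId = "<object ID of the deleted user>"
Invoke-MgGraphRequest -Method DELETE `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/$objectId"
```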
> [!NOTE]
> For B2B direct connect users, data removal begins as soon as the user selects **Leave** in the confirmation message and can take up to 30 days to complete ([learn more](/compliance/regulatory/gdpr-dsr-azure#delete-a-users-data-when-there-is-no-account-in-the-azure-tenant)).

## Next steps
-Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)
+- Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)
+- [Close your Microsoft account](/microsoft-365/commerce/close-your-account)
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
When you add a guest user to your directory, the guest user account has a consen
## Redemption and sign-in through a common endpoint
-Guest users can now sign in to your multi-tenant or Microsoft first-party apps through a common endpoint (URL), for example `https://myapps.microsoft.com`. Previously, a common URL would redirect a guest user to their home tenant instead of your resource tenant for authentication, so a tenant-specific link was required (for example `https://myapps.microsoft.com/?tenantid=<tenant id>`). Now the guest user can go to the application's common URL, choose **Sign-in options**, and then select **Sign in to an organization**. The user then types the name of your organization.
+Guest users can now sign in to your multi-tenant or Microsoft first-party apps through a common endpoint (URL), for example `https://myapps.microsoft.com`. Previously, a common URL would redirect a guest user to their home tenant instead of your resource tenant for authentication, so a tenant-specific link was required (for example `https://myapps.microsoft.com/?tenantid=<tenant id>`). Now the guest user can go to the application's common URL, choose **Sign-in options**, and then select **Sign in to an organization**. The user then types the domain name of your organization.
![Screenshots showing common endpoints used for signing in.](media/redemption-experience/common-endpoint-flow-small.png)
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
When a requirement exists to deploy IaaS workloads to Azure that require identit
![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-with-azure-ad-resource-management/sign-into-vm.png)
-**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad).
+**Supported operating systems**: Signing in to virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux).
**Credentials**: One of the key benefits of signing in to virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine.

>[!NOTE]
>The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers.
-**Network Requirements**: These virtual machines will need to access Azure AD for authentication so you must ensure that the virtual machines network configuration permits outbound access to Azure AD endpoints on 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad) for more information.
+**Network Requirements**: These virtual machines will need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints on port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux) for more information.
**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure portal or via the Azure Cloud Shell experience. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md).
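As an illustration, a role assignment for VM sign-in might look like the following Az PowerShell sketch; the resource group, VM name, and user principal name are placeholder values:

```powershell
# Minimal sketch: grant a user Azure AD sign-in access to a specific VM.
# Resource group, VM name, and UPN are placeholder values.
Connect-AzAccount

$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"

# "Virtual Machine Administrator Login" grants administrator access on the VM;
# use "Virtual Machine User Login" for standard-user access instead.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Virtual Machine Administrator Login" `
    -Scope $vm.Id
```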
For this isolated model, it's assumed that there's no connectivity to the VNet t
* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
The set of default permissions depends on whether the user is a native member of
| | - | -
Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Read all public properties of users and contacts</li><li>Invite guests<li>Change their own password<li>Manage their own mobile phone number<li>Manage their own photo<li>Invalidate their own refresh tokens</li></ul> | <ul><li>Read their own properties<li>Read display name, email, sign-in name, photo, user principal name, and user type properties of other users and contacts<li>Change their own password<li>Search for another user by object ID (if allowed)<li>Read manager and direct report information of other users</li></ul> | <ul><li>Read their own properties<li>Change their own password</li><li>Manage their own mobile phone number</li></ul>
Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul>
-Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications</li></ul> | <ul><li>Read properties of registered and enterprise applications</li></ul> | <ul><li>Read properties of registered and enterprise applications
+Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>List permissions granted to applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications</li></ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</li></ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul>
Devices | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions
Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul>
Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions
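Several of the member-user defaults in this table are backed by the tenant's authorization policy, which you can inspect with Microsoft Graph. A minimal, read-only sketch:

```powershell
# Minimal sketch: read the defaultUserRolePermissions that back several of the
# member-user defaults above (for example, creating apps and security groups).
Connect-MgGraph -Scopes "Policy.Read.All"

$policy = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"

# Flags include allowedToCreateApps, allowedToCreateSecurityGroups,
# and allowedToReadOtherUsers
$policy.defaultUserRolePermissions
```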
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
|employeeLeaveDateTime|DateTimeOffset|Yes|Not currently|Not currently|

> [!NOTE]
-> To take advantaged of leaver scenarios, you can set the employeeLeaveDateTime manually for cloud-only users. For more information, see: [Set employeeLeaveDateTime](set-employee-leave-date-time.md)
+> To take advantage of leaver scenarios, you can set the employeeLeaveDateTime manually for cloud-only users. For more information, see [Configure the employeeLeaveDateTime property for a user](/graph/tutorial-lifecycle-workflows-set-employeeleavedatetime)
This document explains how to set up synchronization of the required attributes from on-premises by using Azure AD Connect cloud sync and Azure AD Connect.
active-directory Set Employee Leave Date Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/set-employee-leave-date-time.md
- Title: Set employeeLeaveDateTime
-description: Explains how to manually set employeeLeaveDateTime.
---- Previously updated : 09/07/2022---
-# Set employeeLeaveDateTime
-
-This article describes how to manually set the employeeLeaveDateTime attribute for a user. This attribute can be set as a trigger for leaver workflows created using Lifecycle Workflows.
-
-## Required permission and roles
-
-To set the employeeLeaveDateTime attribute, you must make sure the correct delegated roles and application permissions are set. They are as follows:
-
-### Delegated
-
-In delegated scenarios, the signed-in user needs the Global Administrator role to update the employeeLeaveDateTime attribute. One of the following delegated permissions is also required:
-- User-LifeCycleInfo.ReadWrite.All
-- Directory.AccessAsUser.All
-
-### Application
-
-Updating the employeeLeaveDateTime requires the User-LifeCycleInfo.ReadWrite.All application permission.
-
-## Set employeeLeaveDateTime via PowerShell
-To set the employeeLeaveDateTime for a user by using PowerShell, enter the following information:
-
- ```powershell
- Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
- Select-MgProfile -Name "beta"
-
- $UserId = "<Object ID of the user>"
- $employeeLeaveDateTime = "<Leave date>"
-
- $Body = '{"employeeLeaveDateTime": "' + $employeeLeaveDateTime + '"}'
- Update-MgUser -UserId $UserId -BodyParameter $Body
-
- $User = Get-MgUser -UserId $UserId -Property employeeLeaveDateTime
- $User.AdditionalProperties
- ```
-
- This example sets the leave date for a user who will leave on September 30, 2022 at 23:59:59 UTC.
-
- ```powershell
- Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
- Select-MgProfile -Name "beta"
-
- $UserId = "528492ea-779a-4b59-b9a3-b3773ef6da6d"
- $employeeLeaveDateTime = "2022-09-30T23:59:59Z"
-
- $Body = '{"employeeLeaveDateTime": "' + $employeeLeaveDateTime + '"}'
- Update-MgUser -UserId $UserId -BodyParameter $Body
-
- $User = Get-MgUser -UserId $UserId -Property employeeLeaveDateTime
- $User.AdditionalProperties
-```
--
-## Next steps
-- [How to synchronize attributes for Lifecycle workflows](how-to-lifecycle-workflow-sync-attributes.md)
-- [Lifecycle Workflows templates](lifecycle-workflow-templates.md)
active-directory Tutorial Offboard Custom Workflow Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-graph.md
- Title: 'Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)'
-description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview).
------- Previously updated : 08/18/2022----
-# Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)
-
-This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Microsoft Graph API.
-
-This off-boarding scenario will run a workflow on-demand and accomplish the following tasks:
-
-1. Remove user from all groups
-2. Remove user from all Teams
-3. Delete user account
-
-To learn more about running a workflow on-demand, see [Run a workflow on-demand](on-demand-workflow.md).
-
-## Before you begin
-
-As part of the prerequisites for completing this tutorial, you will need an account that has group and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
-
-The leaver scenario can be broken down into the following:
-- **Prerequisite:** Create a user account that represents an employee leaving your organization
-- **Prerequisite:** Prepare the user account with groups and Teams memberships
-- Create the lifecycle management workflow
-- Run the workflow on-demand
-- Verify that the workflow was successfully executed
-
-## Create a leaver workflow on-demand using Graph API
-
-Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. The string is "joiner", "mover", or "leaver" and can support multiple strings. The category of the workflow must also contain the category of its tasks. For full task definitions, see [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true", the workflow will run. |
-|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: <br><br>A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. <br><br>The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
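If you don't already know a task's taskDefinitionId, one way to discover the built-in values is the following hedged sketch, assuming the beta `lifecycleWorkflows/taskDefinitions` endpoint:

```powershell
# Hedged sketch, assuming the beta lifecycleWorkflows/taskDefinitions endpoint:
# list the built-in task definitions to find taskDefinitionId values.
Connect-MgGraph -Scopes "LifecycleWorkflows.Read.All"

$defs = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions"
$defs.value | Select-Object id, displayName, category
```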
-
-For the purpose of this tutorial, there are three tasks that will be introduced in this workflow:
-
-### Remove user from all groups task
-
-```Example
-"tasks":[
- {
- "continueOnError": true,
- "displayName": "Remove user from all groups",
- "description": "Remove user from all Azure AD groups memberships",
- "isEnabled": true,
- "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
- "arguments": []
- }
- ]
-```
-
-> [!NOTE]
-> The task does not support removing users from Privileged Access Groups, Dynamic Groups, and synchronized Groups.
-
-### Remove user from all Teams task
-
-```Example
-"tasks":[
- {
- "continueOnError": true,
- "description": "Remove user from all Teams",
- "displayName": "Remove user from all Teams memberships",
- "isEnabled": true,
- "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "arguments": []
- }
- ]
-```
-### Delete user task
-
-```Example
-"tasks":[
- {
- "continueOnError": true,
- "displayName": "Delete user account",
- "description": "Delete user account in Azure AD",
- "isEnabled": true,
- "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "arguments": []
- }
- ]
-```
-### Leaver workflow on-demand
-
-The following POST API call will create a leaver workflow that can be executed on-demand for real-time employee terminations.
-
- ```http
-POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
-Content-type: application/json
-
-{
- "category": "Leaver",
- "displayName": "Real-time employee termination",
- "description": "Execute real-time termination tasks for employees on their last day of work",
- "isEnabled": true,
- "isSchedulingEnabled": false,
- "executionConditions":{
- "@odata.type":"#microsoft.graph.identityGovernance.onDemandExecutionOnly"
- },
- "tasks": [
- {
- "continueOnError": false,
- "description": "Remove user from all Azure AD groups memberships",
- "displayName": "Remove user from all groups",
- "executionSequence": 1,
- "isEnabled": true,
- "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
- "arguments": []
- },
- {
- "continueOnError": false,
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "executionSequence": 2,
- "isEnabled": true,
- "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "arguments": []
- },
- {
- "continueOnError": false,
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "executionSequence": 3,
- "isEnabled": true,
- "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "arguments": []
- }
- ]
-}
-```
-
-## Run the workflow
-
-Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
-
->[!NOTE]
->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-
-To run a workflow on-demand for users by using the Microsoft Graph API, do the following steps:
-
-1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-2. Make sure the method is still set to **POST**, the version is set to **beta**, and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the request box. Replace `<id>` with the ID of the workflow.
-3. Copy the code below into the **Request body**.
-4. Replace `<userid>` in the code below with the value of the user's ID.
-5. Select **Run query**.
- ```json
- {
-   "subjects":[
-     {"id":"<userid>"}
-   ]
- }
-```
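If you prefer PowerShell over Graph Explorer, the same activation call can be issued with the Microsoft Graph PowerShell SDK; a minimal sketch, with placeholder IDs:

```powershell
# Minimal sketch: run the workflow on-demand from PowerShell instead of Graph
# Explorer. <workflow id> and <userid> are placeholders.
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"

$workflowId = "<workflow id>"
$body = @{ subjects = @(@{ id = "<userid>" }) } | ConvertTo-Json -Depth 3

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/$workflowId/activate" `
    -Body $body -ContentType "application/json"
```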
-
-## Check tasks and workflow status
-
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots (users, runs, and tasks) currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports.
-
-To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response of the POST API call that was used to create the workflow.
-
-This example shows how to list the userProcessingResults for the last 7 days.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults
-```
-Furthermore, it is possible to get a summary of the userProcessingResults for a quicker overview of large amounts of data; for this, a time span must be specified.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-You may also check the full details about the tasks of a given userProcessingResult. You will need to provide the workflow ID, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
-```
-
-## Next steps
-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
-- [Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md)
active-directory Tutorial Offboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md
At any time, you may monitor the status of the workflows and the tasks. As a rem
## Next steps

- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
-- [Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md)
+- [Complete employee offboarding tasks in real-time on their last day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow)
active-directory Tutorial Onboard Custom Workflow Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-graph.md
- Title: 'Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)'
-description: Tutorial for onboarding users to an organization using Lifecycle workflows with Microsoft Graph (preview).
------- Previously updated : 08/18/2022----
-# Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)
-
-This tutorial provides a step-by-step guide on how to automate pre-hire tasks with Lifecycle workflows using the Microsoft Graph API.
-
-This pre-hire scenario will generate a temporary password for our new employee and send it via email to the user's new manager.
-
-## Before you begin
-
-Two accounts are required for the tutorial, one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set:
-- employeeHireDate must be set to today
-- department must be set to sales
-- manager attribute must be set, and the manager account should have a mailbox to receive an email.
-
-For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial.
-
-Detailed breakdown of the relevant attributes:
-
- | Attribute | Description |Set on|
- |: |::|--|
 |mail|Used to notify the manager of the new employee's temporary access pass|Both|
 |manager|This attribute is used by the lifecycle workflow|Employee|
- |employeeHireDate|Used to trigger the workflow|Both|
- |department|Used to provide the scope for the workflow|Both|
-
-The pre-hire scenario can be broken down into the following:
- - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager
- - **Prerequisite:** Edit the manager attribute for this scenario using Microsoft Graph Explorer
- - **Prerequisite:** Enabling and using Temporary Access Pass (TAP)
- - Creating the lifecycle management workflow
- - Triggering the workflow
- - Verifying the workflow was successfully executed
-
-## Create a pre-hire workflow using Graph API
-
-Now that the pre-hire workflow attributes have been updated and correctly populated, a pre-hire workflow can then be created to generate a Temporary Access Pass (TAP) and send it via email to a user's manager. Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. The string is "joiner", "mover", or "leaver" and can support multiple strings. The category of the workflow must also contain the category of its tasks. For full task definitions, see [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true", the workflow will run. |
-|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: <br><br> A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>a scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
-
-The following POST API call will create a pre-hire workflow that will generate a TAP and send it via email to the user's manager.
-
- ```http
- POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
-Content-type: application/json
-
-{
- "displayName":"Onboard pre-hire employee",
- "description":"Configure pre-hire tasks for onboarding employees before their first day",
- "isEnabled":true,
- "isSchedulingEnabled": false,
- "executionConditions": {
- "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
- "scope": {
- "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
- "rule": "(department eq 'sales')"
- },
- "trigger": {
- "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute": "employeeHireDate",
- "offsetInDays": -2
- }
- },
- "tasks":[
- {
- "isEnabled":true,
- "category": "Joiner",
- "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d",
- "displayName":"Generate TAP And Send Email",
- "description":"Generate Temporary Access Pass and send via email to user's manager",
- "arguments":[
- {
- "name": "tapLifetimeMinutes",
- "value": "480"
- },
- {
- "name": "tapIsUsableOnce",
- "value": "true"
- }
- ]
- }
- ]
-}
-```
-
-## Run the workflow
-Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
-
->[!NOTE]
->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-
-To run a workflow on-demand for users by using the Microsoft Graph API, do the following steps:
-
-1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-2. Make sure the method is still set to **POST**, the version is set to **beta**, and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the request box. Replace `<id>` with the ID of the workflow.
-3. Copy the code below into the **Request body**.
-4. Replace `<userid>` in the code below with the value of the user's ID.
-5. Select **Run query**.
- ```json
- {
-   "subjects":[
-     {"id":"<userid>"}
-   ]
- }
-```
-
-## Check tasks and workflow status
-
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots (users, runs, and tasks) currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports.
-
-To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response of the POST API call that was used to create the workflow.
-
-This example shows how to list the userProcessingResults for the last 7 days.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults
-```
-Furthermore, it is possible to get a summary of the userProcessingResults for a quicker overview of large amounts of data; for this, a time span must be specified.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-You may also check the full details about the tasks of a given userProcessingResult. You will need to provide the workflow ID, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
-```
-
-## Enable the workflow schedule
-
-After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may run the following PATCH call.
-
-```http
-PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
-Content-type: application/json
-
-{
- "displayName":"Onboard pre-hire employee",
- "description":"Configure pre-hire tasks for onboarding employees before their first day",
- "isEnabled": true,
- "isSchedulingEnabled": true
-}
-
-```
-
-## Next steps
-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
-- [Automate employee onboarding tasks before their first day of work with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md)
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
After running your workflow on-demand and checking that everything is working fi
## Next steps

- [Tutorial: Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
-- [Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md)
+- [Automate employee onboarding tasks before their first day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-onboard-custom-workflow)
active-directory Tutorial Scheduled Leaver Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-graph.md
- Title: Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)
-description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview).
------- Previously updated : 08/18/2022----
-# Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)
-
-This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Microsoft Graph API.
-
-This post off-boarding scenario will run a scheduled workflow and accomplish the following tasks:
-
-1. Remove all licenses for user
-2. Remove user from all Teams
-3. Delete user account
-
-## Before you begin
-
-As part of the prerequisites for completing this tutorial, you will need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
-
-The scheduled leaver scenario can be broken down into the following:
-- **Prerequisite:** Create a user account that represents an employee leaving your organization
-- **Prerequisite:** Prepare the user account with licenses and Teams memberships
-- Create the lifecycle management workflow
-- Run the scheduled workflow after last day of work
-- Verify that the workflow was successfully executed
-
-## Create a scheduled leaver workflow using Graph API
-
-Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. The string is "joiner", "mover", or "leaver" and can support multiple strings. The category of the workflow must also contain the category of its tasks. For full task definitions, see [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true", the workflow will run. |
-|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: <br><br>a time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
-
-For the purpose of this tutorial, there are three tasks that will be introduced in this workflow:
-
-### Remove all licenses for user
-
-```Example
-"tasks":[
- {
- "category": "leaver",
- "description": "Remove all licenses assigned to the user",
- "displayName": "Remove all licenses for user",
- "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
- "version": 1,
- "parameters": []
- }
- ]
-```
-### Remove user from all Teams task
-
-```Example
-"tasks":[
- {
- "category": "leaver",
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "version": 1,
- "parameters": []
- }
- ]
-```
-### Delete user account
-
-```Example
-"tasks":[
- {
- "category": "leaver",
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "version": 1,
- "parameters": []
- }
- ]
-```
-### Scheduled leaver workflow
-
-The following POST API call will create a scheduled leaver workflow to configure off-boarding tasks for employees after their last day of work.
-
-```http
-POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
-Content-type: application/json
-
-{
- "category": "leaver",
- "displayName": "Post-Offboarding of an employee",
- "description": "Configure offboarding tasks for employees after their last day of work",
- "isEnabled": true,
- "isSchedulingEnabled": false,
- "executionConditions": {
- "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
- "scope": {
- "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
- "rule": "department eq 'Marketing'"
- },
- "trigger": {
- "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute": "employeeLeaveDateTime",
- "offsetInDays": 7
- }
- },
- "tasks": [
- {
- "category": "leaver",
- "continueOnError": false,
- "description": "Remove all licenses assigned to the user",
- "displayName": "Remove all licenses for user",
- "executionSequence": 1,
- "isEnabled": true,
- "taskDefinitionId": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
- "arguments": []
- },
- {
- "category": "leaver",
- "continueOnError": false,
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "executionSequence": 2,
- "isEnabled": true,
- "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "arguments": []
- },
- {
- "category": "leaver",
- "continueOnError": false,
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "executionSequence": 3,
- "isEnabled": true,
- "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "arguments": []
- }
- ]
-}
-```
-
-## Run the workflow
-Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
-
->[!NOTE]
->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-
-To run a workflow on-demand for users by using the Microsoft Graph API, do the following steps:
-
-1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-2. Make sure the method is still set to **POST**, the version is set to **beta**, and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the request box. Replace `<id>` with the ID of the workflow.
-3. Copy the code below into the **Request body**.
-4. Replace `<userid>` in the code below with the value of the user's ID.
-5. Select **Run query**.
- ```json
- {
-   "subjects":[
-     {"id":"<userid>"}
-   ]
- }
-```
-
-## Check tasks and workflow status
-
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots (users, runs, and tasks) currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports.
-
-To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response of the POST API call that was used to create the workflow.
-
-This example shows how to list the userProcessingResults for the last 7 days.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults
-```
-Furthermore, it is possible to get a summary of the userProcessingResults for a quicker overview of large amounts of data; for this, a time span must be specified.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-You may also check the full details about the tasks of a given userProcessingResult. You will need to provide the workflow ID, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
-```
-## Enable the workflow schedule
-
-After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may run the following PATCH call.
-
-```http
-PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
-Content-type: application/json
-
-{
- "displayName":"Post-Offboarding of an employee",
- "description":"Configure offboarding tasks for employees after their last day of work",
- "isEnabled": true,
- "isSchedulingEnabled": true
-}
-
-```
-
-## Next steps
-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
-- [Automate employee offboarding tasks after their last day of work with Azure portal (preview)](tutorial-scheduled-leaver-portal.md)
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
After running your workflow on-demand and checking that everything is working fi
## Next steps

- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
-- [Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)](tutorial-scheduled-leaver-graph.md)
+- [Automate employee offboarding tasks after their last day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-scheduled-leaver)
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
To configure OIDC-based SSO for an application:
:::image type="content" source="media/add-application-portal-setup-oidc-sso/oidc-sso-configuration.png" alt-text="Complete the consent screen for an application.":::
-1. Select **Consent on behalf of your organization** and then select **Accept**. The application is added to your tenant and the application home page appears. To learn more about user and admin consent, see [Understand user and admin consent](../develop/howto-convert-app-to-be-multi-tenant.md#understand-user-and-admin-consent).
+1. Select **Consent on behalf of your organization** and then select **Accept**. The application is added to your tenant and the application home page appears. To learn more about user and admin consent, see [Understand user and admin consent](../develop/howto-convert-app-to-be-multi-tenant.md#understand-user-and-admin-consent-and-make-appropriate-code-changes).
## Next steps
active-directory Powershell Export All App Registrations Secrets And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-app-registrations-secrets-and-certs.md
Title: PowerShell sample - Export secrets and certificates for app registrations in Azure Active Directory tenant. description: PowerShell example that exports all secrets and certificates for the specified app registrations in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Powershell Export All Enterprise Apps Secrets And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md
Title: PowerShell sample - Export secrets and certificates for enterprise apps in Azure Active Directory tenant. description: PowerShell example that exports all secrets and certificates for the specified enterprise apps in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Powershell Export Apps With Expriring Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-expriring-secrets.md
Title: PowerShell sample - Export apps with expiring secrets and certificates in Azure Active Directory tenant. description: PowerShell example that exports all apps with expiring secrets and certificates for the specified apps in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
Title: PowerShell sample - Export apps with secrets and certificates expiring beyond the required date in Azure Active Directory tenant. description: PowerShell example that exports all apps with secrets and certificates expiring beyond the required date for the specified apps in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
Previously updated : 10/07/2021 Last updated : 10/20/2022
The need for access to privileged Azure resource and Azure AD roles by employees
To create access reviews for Azure resources, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role for the Azure resources. To create access reviews for Azure AD roles, you must be assigned to the [Global Administrator](../roles/permissions-reference.md#global-administrator) or the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-> [!Note]
-> In public preview, you can scope an access review to service principals with access to Azure AD and Azure resource roles with an Azure Active Directory Premium P2 edition active in your tenant. After general availability, additional licenses might be required.
-
## Create access reviews

1. Sign in to [Azure portal](https://portal.azure.com/) as a user that is assigned to one of the prerequisite role(s).
The need for access to privileged Azure resource and Azure AD roles by employees
3. For **Azure AD roles**, select **Azure AD roles** under **Privileged Identity Management**. For **Azure resources**, select **Azure resources** under **Privileged Identity Management**.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in Azure Portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in the Azure portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
4. For **Azure AD roles**, select **Azure AD roles** again under **Manage**. For **Azure resources**, select the subscription you want to manage.
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
This section answers frequently asked questions and discusses known issues with
**Q: What SIEM tools are currently supported?**
-**A**: Currently, Azure Monitor is supported by [Splunk](./howto-integrate-activity-logs-with-splunk.md), IBM QRadar, [Sumo Logic](https://help.sumologic.com/Send-Dat).
+**A**: Currently, Azure Monitor is supported by [Splunk](./howto-integrate-activity-logs-with-splunk.md), IBM QRadar, [Sumo Logic](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure/), [ArcSight](./howto-integrate-activity-logs-with-arcsight.md), LogRhythm, and Logz.io. For more information about how the connectors work, see [Stream Azure monitoring data to an event hub for consumption by an external tool](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
This section answers frequently asked questions and discusses known issues with
**Q: How do I integrate Azure AD activity logs with Sumo Logic?**
-**A**: First, [route the Azure AD activity logs to an event hub](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Collect_Logs_for_Azure_Active_Directory), then follow the steps to [Install the Azure AD application and view the dashboards in SumoLogic](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Install_the_Azure_Active_Directory_App_and_View_the_Dashboards).
+**A**: First, [route the Azure AD activity logs to an event hub](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory), then follow the steps to [Install the Azure AD application and view the dashboards in SumoLogic](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards).
active-directory Howto Integrate Activity Logs With Sumologic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md
To use this feature, you need:
## Steps to integrate Azure AD logs with SumoLogic

1. First, [stream the Azure AD logs to an Azure event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Collect_Logs_for_Azure_Active_Directory).
-3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Install_the_Azure_Active_Directory_App_and_View_the_Dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
+2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
+3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
![Dashboard](./media/howto-integrate-activity-logs-with-sumologic/overview-dashboard.png)
To use this feature, you need:
* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
-* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions)
+* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions)
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
After data is displayed in the event hub, you can access and read the data in tw
* **IBM QRadar**: The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site.
- * **Sumo Logic**: To set up Sumo Logic to consume data from an event hub, see [Install the Azure AD app and view the dashboards](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Install_the_Azure_Active_Directory_App_and_View_the_Dashboards).
+ * **Sumo Logic**: To set up Sumo Logic to consume data from an event hub, see [Install the Azure AD app and view the dashboards](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards).
* **Set up custom tooling**. If your current SIEM isn't supported in Azure Monitor diagnostics yet, you can set up custom tooling by using the Event Hubs API. To learn more, see the [Getting started receiving messages from an event hub](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md).
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Single Sign-On](./media/atlassian-cloud-tutorial/configure.png)
- b. Copy **Azure AD Identifier** value from Azure portal, paste it in the **Identity Provider Entity ID** textbox in Atlassian.
+ b. Copy **Login URL** value from Azure portal, paste it in the **Identity Provider SSO URL** textbox in Atlassian.
- c. Copy **Login URL** value from Azure portal, paste it in the **Identity Provider SSO URL** textbox in Atlassian.
+ c. Copy **Azure AD Identifier** value from Azure portal, paste it in the **Identity Provider Entity ID** textbox in Atlassian.
![Identity Provider SSO URL](./media/atlassian-cloud-tutorial/configuration-azure.png)
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
US executive order [14028, Improving the Nation's Cyber Security](https://www.wh
This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles, as described in memorandum 22-09.
-The release of memorandum 22-09 is designed to support Zero Trust initiatives within federal agencies. It also provides regulatory guidance in supporting federal cybersecurity and data privacy laws. The memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf):
+The release of memorandum 22-09 is designed to support Zero Trust initiatives within federal agencies. It also provides regulatory guidance in supporting federal cybersecurity and data privacy laws. The memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://cloudsecurityalliance.org/artifacts/dod-zero-trust-reference-architecture/):
>"The foundational tenet of the Zero Trust Model is that no actor, system, network, or service operating outside or within the security perimeter is trusted. Instead, we must verify anything and everything attempting to establish access. It is a dramatic paradigm shift in philosophy of how we secure our infrastructure, networks, and data, from verify once at the perimeter to continual verification of each user, device, application, and transaction."
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
The Secrets Store CSI Driver allows for the following methods to access an Azure
Follow the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods] for your chosen method.
+> [!NOTE]
+> The rest of the examples on this page require that you've followed the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods], chosen one of the identity methods, and configured a SecretProviderClass. Come back to this page after you've completed those steps.
+
## Validate the secrets

After the pod starts, the mounted content at the volume path that you specified in your deployment YAML is available.
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
To learn more about AKS, and walk through a complete code to deployment example,
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
To learn more about AKS, and walk through a complete code to deployment example,
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
+
+ Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
+description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.
++ Last updated : 10/20/2022+++
+#Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
++
+# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
+
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access AKS nodes using SSH, including Windows Server nodes. You can also [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp]. For security purposes, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+
+This article shows you how to create a connection to an AKS node.
+
+## Before you begin
+
+This article assumes you have an SSH key. If not, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. If you use PuTTY Gen to create the key pair, save the key pair in an OpenSSH format rather than the default PuTTY private key format (.ppk file).
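+
+For example, on macOS or Linux you can generate a key pair with `ssh-keygen`; the key type and file path below are only illustrative defaults:
+
+```bash
+# Generate an RSA key pair to use for SSH access to AKS nodes
+ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
+```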
+
+You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
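+
+As a quick check, you can verify and, if needed, upgrade your local CLI in place (`az upgrade` is available in Azure CLI 2.11.0 and later):
+
+```bash
+# Show the installed Azure CLI version (2.0.64 or later is required)
+az --version
+
+# Upgrade the Azure CLI in place if the version is too old
+az upgrade
+```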
+
+## Create an interactive shell connection to a Linux node
+
+To create an interactive shell connection to a Linux node, use the `kubectl debug` command to run a privileged container on your node. To list your nodes, use the `kubectl get nodes` command:
+
+```bash
+kubectl get nodes -o wide
+```
+
+The following example resembles output from the command:
+
+```output
+NAME                                STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
+aks-nodepool1-12345678-vmss000000   Ready    agent   13m   v1.19.9   10.240.0.4    <none>        Ubuntu 18.04.5 LTS               5.4.0-1046-azure   containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001   Ready    agent   13m   v1.19.9   10.240.0.35   <none>        Ubuntu 18.04.5 LTS               5.4.0-1046-azure   containerd://1.4.4+azure
+aksnpwin000000                      Ready    agent   87s   v1.19.9   10.240.0.67   <none>        Windows Server 2019 Datacenter   10.0.17763.1935    docker://19.3.1
+```
+
+Use the `kubectl debug` command to run a container image on the node to connect to it:
+
+```bash
+kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
+```
+
+This command starts a privileged container on your node and connects to it.
+
+The following example resembles output from the command:
+
+```output
+Creating debugging pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx with container debugger on node aks-nodepool1-12345678-vmss000000.
+If you don't see a command prompt, try pressing enter.
+root@aks-nodepool1-12345678-vmss000000:/#
+```
+
+This privileged container gives access to the node.
+
+> [!NOTE]
+> You can interact with the node session by running `chroot /host` from the privileged container.
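+
+As a minimal sketch of that workflow (the `journalctl` query is only an illustration of a node-level command you might run):
+
+```bash
+# Switch from the debug container into the node's own filesystem;
+# this starts a shell in the node's context
+chroot /host
+
+# From that shell, node-level commands are available, for example:
+journalctl -u kubelet --no-pager | tail -n 20
+```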
+
+### Remove Linux node access
+
+When done, `exit` the interactive shell session. After the interactive container session closes, delete the pod used for access with `kubectl delete pod`.
+
+```bash
+kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
+```
+
+## Create the SSH connection to a Windows node
+
+At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
+
+To connect to another node in the cluster, use the `kubectl debug` command. For more information, see [Create an interactive shell connection to a Linux node][ssh-linux-kubectl-debug].
+
+To create the SSH connection to the Windows Server node from another node, use the SSH keys provided when you created the AKS cluster and the internal IP address of the Windows Server node.
+
+Open a new terminal window and use the `kubectl get pods` command to get the name of the pod started by `kubectl debug`.
+
+```bash
+kubectl get pods
+```
+
+The following example resembles output from the command:
+
+```output
+NAME                                                    READY   STATUS    RESTARTS   AGE
+node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx   1/1     Running   0          21s
+```
+
+In the above example, *node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx* is the name of the pod started by `kubectl debug`.
+
+Use the `kubectl port-forward` command to open a connection to the deployed pod:
+
+```bash
+kubectl port-forward node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 2022:22
+```
+
+The following example resembles output from the command:
+
+```output
+Forwarding from 127.0.0.1:2022 -> 22
+Forwarding from [::1]:2022 -> 22
+```
+
+The above example begins forwarding network traffic from port 2022 on your development computer to port 22 on the deployed pod. When using `kubectl port-forward` to open a connection and forward network traffic, the connection remains open until you stop the `kubectl port-forward` command.
+
+Open a new terminal and run the command `kubectl get nodes` to show the internal IP address of the Windows Server node:
+
+```bash
+kubectl get nodes -o wide
+```
+
+The following example resembles output from the command:
+
+```output
+NAME                                STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
+aks-nodepool1-12345678-vmss000000   Ready    agent   13m   v1.19.9   10.240.0.4    <none>        Ubuntu 18.04.5 LTS               5.4.0-1046-azure   containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001   Ready    agent   13m   v1.19.9   10.240.0.35   <none>        Ubuntu 18.04.5 LTS               5.4.0-1046-azure   containerd://1.4.4+azure
+aksnpwin000000                      Ready    agent   87s   v1.19.9   10.240.0.67   <none>        Windows Server 2019 Datacenter   10.0.17763.1935    docker://19.3.1
+```
+
+In the above example, *10.240.0.67* is the internal IP address of the Windows Server node.
+
+Create an SSH connection to the Windows Server node using the internal IP address, and connect to port 22 through port 2022 on your development computer. The default username for AKS nodes is *azureuser*. Accept the prompt to continue with the connection. You're then provided with the command prompt of your Windows Server node:
+
+```bash
+ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' azureuser@10.240.0.67
+```
+
+The following example resembles output from the command:
+
+```output
+The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established.
+ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG.
+Are you sure you want to continue connecting (yes/no)? yes
+
+[...]
+
+Microsoft Windows [Version 10.0.17763.1935]
+(c) 2018 Microsoft Corporation. All rights reserved.
+
+azureuser@aksnpwin000000 C:\Users\azureuser>
+```
+
+> [!NOTE]
+> If you prefer to use password authentication, include the parameter `-o PreferredAuthentications=password`. For example:
+>
+> ```console
+> ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' -o PreferredAuthentications=password azureuser@10.240.0.67
+> ```
+
+### Remove SSH access
+
+When done, `exit` the SSH session, stop any port forwarding, and then `exit` the interactive container session. After the interactive container session closes, delete the pod used for SSH access using the `kubectl delete pod` command.
+
+```bash
+kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
+```
+
+## Next steps
+
+If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+
+<!-- INTERNAL LINKS -->
+[view-kubelet-logs]: kubelet-logs.md
+[view-master-logs]: monitor-aks-reference.md#resource-logs
+[install-azure-cli]: /cli/azure/install-azure-cli
+[aks-windows-rdp]: rdp.md
+[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
+[ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
+[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS]
<!-- LINKS - external --> [kured]: https://github.com/weaveworks/kured
-[kured-install]: https://github.com/weaveworks/kured/tree/master/charts/kured
+[kured-install]: https://github.com/kubereboot/kured/tree/main/cmd/kured
[kubectl-get-nodes]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get <!-- LINKS - internal -->
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
If you need more troubleshooting data, you can [view the Kubernetes primary node
[aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md [aks-quickstart-windows-powershell]: ./learn/quick-windows-container-deploy-powershell.md [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-vm-delete]: /cli/azure/vm#az_vm_delete
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
Advance to the next tutorial to learn how to deploy an application to the cluste
[quotas-skus-regions]: quotas-skus-regions.md [azure-powershell-install]: /powershell/azure/install-az-ps [new-azakscluster]: /powershell/module/az.aks/new-azakscluster
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
In this article, you deployed a Kubernetes cluster and configured it to use a wo
[az-feature-register]: /cli/azure/feature#az_feature_register [workload-identity-overview]: workload-identity-overview.md [create-key-vault-azure-cli]: ../key-vault/general/quick-create-cli.md
-[az-keyvault-list]: /cli/azure/keyvaultt#az-keyvault-list
+[az-keyvault-list]: /cli/azure/keyvault#az-keyvault-list
[aks-identity-concepts]: concepts-identity.md [az-account]: /cli/azure/account [az-aks-create]: /cli/azure/aks#az-aks-create
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identities (preview) on Azure Kubernetes Service
description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 09/29/2022 Last updated : 10/20/2022
This article helps you understand this new authentication feature, and reviews t
## Dependencies -- AKS supports Azure AD workload identities on version 1.24 and higher.
+- AKS supports Azure AD workload identities on version 1.22 and higher.
- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
The following table summarizes our migration or deployment recommendations for w
[dotnet-azure-identity-client-library]: /dotnet/api/overview/azure/identity-readme [java-azure-identity-client-library]: /java/api/overview/azure/identity-readme [javascript-azure-identity-client-library]: /javascript/api/overview/azure/identity-readme
-[python-azure-identity-client-library]: /python/api/overview/azure/identity-readme
+[python-azure-identity-client-library]: /python/api/overview/azure/identity-readme
analysis-services Analysis Services Addservprinc Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-addservprinc-admins.md
Title: Add service principal to Azure Analysis Services admin role | Microsoft Docs description: Learn how to add an automation service principal to the Azure Analysis Services server admin role -+ Last updated 05/14/2021
analysis-services Analysis Services Async Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-async-refresh.md
Title: Asynchronous refresh for Azure Analysis Services models | Microsoft Docs description: Describes how to use the Azure Analysis Services REST API to code asynchronous refresh of model data. -+ Last updated 02/02/2022
analysis-services Analysis Services Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-backup.md
Title: Azure Analysis Services database backup and restore | Microsoft Docs description: This article describes how to backup and restore model metadata and data from an Azure Analysis Services database. -+ Last updated 03/29/2021
analysis-services Analysis Services Bcdr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-bcdr.md
Title: Azure Analysis Services high availability | Microsoft Docs description: This article describes how Azure Analysis Services provides high availability during service disruption. -+ Last updated 02/02/2022
analysis-services Analysis Services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-capacity-limits.md
Title: Azure Analysis Services resource and object limits | Microsoft Docs description: This article describes resource and object limits for an Azure Analysis Services server. -+ Last updated 03/29/2021
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-excel.md
Title: Connect to Azure Analysis Services with Excel | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Excel. Once connected, users can create PivotTables to explore data. -+ Last updated 05/16/2022
analysis-services Analysis Services Connect Pbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-pbi.md
Title: Connect to Azure Analysis Services with Power BI | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Power BI. Once connected, users can explore model data. -+ Last updated 06/30/2021
analysis-services Analysis Services Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect.md
Title: Connecting to Azure Analysis Services servers| Microsoft Docs description: Learn how to connect to and get data from an Analysis Services server in Azure. -+ Last updated 02/02/2022
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
Title: Quickstart - Create an Azure Analysis Services server resource by using B
description: Quickstart showing how to an Azure Analysis Services server resource by using a Bicep file. Last updated 03/08/2022 -+ tags: azure-resource-manager, bicep
analysis-services Analysis Services Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md
Last updated 10/12/2021 -+ #Customer intent: As a BI developer, I want to create an Azure Analysis Services server by using PowerShell.
analysis-services Analysis Services Create Sample Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-sample-model.md
Title: Tutorial - Add a sample model- Azure Analysis Services | Microsoft Docs description: In this tutorial, learn how to add a sample model in Azure Analysis Services. -+ Last updated 10/12/2021
analysis-services Analysis Services Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-server.md
Last updated 10/12/2021 -+ #Customer intent: As a BI developer, I want to create an Azure Analysis Services server by using the Azure portal.
analysis-services Analysis Services Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-template.md
Last updated 10/12/2021 -+ tags: azure-resource-manager #Customer intent: As a BI developer who is new to Azure, I want to use Azure Analysis Services to store and manage my organizations data models.
analysis-services Analysis Services Database Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-database-users.md
Title: Manage database roles and users in Azure Analysis Services | Microsoft Docs description: Learn how to manage database roles and users on an Analysis Services server in Azure. -+ Last updated 02/02/2022
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md
Title: Data sources supported in Azure Analysis Services | Microsoft Docs description: Describes data sources and connectors supported for tabular 1200 and higher data models in Azure Analysis Services. -+ Last updated 02/02/2022
analysis-services Analysis Services Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-deploy.md
Title: Deploy a model to Azure Analysis Services by using Visual Studio | Microsoft Docs description: Learn how to deploy a tabular model to an Azure Analysis Services server by using Visual Studio. -+ Last updated 12/01/2020
analysis-services Analysis Services Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md
Title: Install On-premises data gateway for Azure Analysis Services | Microsoft Docs description: Learn how to install and configure an On-premises data gateway to connect to on-premises data sources from an Azure Analysis Services server. -+ Last updated 01/31/2022
analysis-services Analysis Services Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway.md
Title: On-premises data gateway for Azure Analysis Services | Microsoft Docs description: An On-premises gateway is necessary if your Analysis Services server in Azure will connect to on-premises data sources. -+ Last updated 02/02/2022
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
Title: Diagnostic logging for Azure Analysis Services | Microsoft Docs description: Describes how to setup up logging to monitoring your Azure Analysis Services server. -+ Last updated 04/27/2021
analysis-services Analysis Services Long Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-long-operations.md
Title: Best practices for long running operations in Azure Analysis Services | Microsoft Docs description: This article describes best practices for long running operations. -+ Last updated 04/27/2021
analysis-services Analysis Services Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage-users.md
Title: Azure Analysis Services authentication and user permissions| Microsoft Docs description: This article describes how Azure Analysis Services uses Azure Active Directory (Azure AD) for identity management and user authentication. -+ Last updated 02/02/2022
analysis-services Analysis Services Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage.md
Title: Manage Azure Analysis Services | Microsoft Docs description: This article describes the tools used to manage administration and management tasks for an Azure Analysis Services server. -+ Last updated 02/02/2022
analysis-services Analysis Services Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-monitor.md
Title: Monitor Azure Analysis Services server metrics | Microsoft Docs description: Learn how Analysis Services use Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers. -+ Last updated 03/04/2020
analysis-services Analysis Services Odc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-odc.md
Title: Connect to Azure Analysis Services with an .odc file | Microsoft Docs description: Learn how to create an Office Data Connection file to connect to and get data from an Analysis Services server in Azure. -+ Last updated 04/27/2021
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Title: What is Azure Analysis Services? description: Learn about Azure Analysis Services, a fully managed platform as a service (PaaS) that provides enterprise-grade data models in the cloud. -+ Last updated 02/15/2022
analysis-services Analysis Services Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-powershell.md
Title: Manage Azure Analysis Services with PowerShell | Microsoft Docs description: Describes Azure Analysis Services PowerShell cmdlets for common administrative tasks such as creating servers, suspending operations, or changing service level. -+ Last updated 04/27/2021
analysis-services Analysis Services Qs Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-qs-firewall.md
Last updated 08/12/2020 -+ #Customer intent: As a BI developer, I want to secure my server by configuring a server firewall and create open IP address ranges for client computers in my organization.
analysis-services Analysis Services Refresh Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-azure-automation.md
Title: Refresh Azure Analysis Services models with Azure Automation | Microsoft Docs description: This article describes how to code model refreshes for Azure Analysis Services by using Azure Automation. -+ Last updated 12/01/2020
analysis-services Analysis Services Refresh Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-logic-app.md
Title: Refresh with Logic Apps for Azure Analysis Services models | Microsoft Docs description: This article describes how to code asynchronous refresh for Azure Analysis Services by using Azure Logic Apps. -+ Last updated 10/30/2019
analysis-services Analysis Services Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-samples.md
Title: Azure Analysis Services code, project, and database samples description: This article describes resources to learn about code, project, and database samples for Azure Analysis Services. -+ Last updated 04/27/2021
analysis-services Analysis Services Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md
Title: Azure Analysis Services scale-out| Microsoft Docs description: Replicate Azure Analysis Services servers with scale-out. Client queries can then be distributed among multiple query replicas in a scale-out query pool. -+ Last updated 04/27/2021
analysis-services Analysis Services Server Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-admins.md
Title: Manage server admins in Azure Analysis Services | Microsoft Docs description: This article describes how to manage server administrators for an Azure Analysis Services server by using the Azure portal, PowerShell, or REST APIs. -+ Last updated 02/02/2022
analysis-services Analysis Services Server Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-alias.md
Title: Azure Analysis Services alias server names | Microsoft Docs description: Learn how to create Azure Analysis Services server name aliases. Users can then connect to your server with a shorter alias name instead of the server name. -+ Last updated 12/07/2021
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md
Title: Automate Azure Analysis Services tasks with service principals | Microsoft Docs description: Learn how to create a service principal for automating Azure Analysis Services administrative tasks. -+ Last updated 02/02/2022
analysis-services Analysis Services Vnet Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-vnet-gateway.md
Title: Configure Azure Analysis Services for VNet data sources | Microsoft Docs description: Learn how to configure an Azure Analysis Services server to use a gateway for data sources on Azure Virtual Network (VNet). -+ Last updated 02/02/2022
analysis-services Move Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/move-between-regions.md
Title: Move Azure Analysis Services to a different region | Microsoft Docs description: Describes how to move an Azure Analysis Services resource to a different region. -+ Last updated 12/01/2020
analysis-services Analysis Services Tutorial Pbid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-pbid.md
Title: Tutorial - Connect Azure Analysis Services with Power BI Desktop | Microsoft Docs description: In this tutorial, learn how to get an Analysis Services server name from the Azure portal and then connect to the server by using Power BI Desktop.-+ Last updated 02/02/2022
analysis-services Analysis Services Tutorial Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-roles.md
Title: Tutorial - Configure Azure Analysis Services roles | Microsoft Docs description: In this tutorial, learn how to configure Azure Analysis Services administrator and user roles by using the Azure portal or SQL Server Management Studio. -+ Last updated 10/12/2021
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
Congratulations, you're running an API in Azure App Service with CORS support.
You can use your own CORS utilities instead of App Service CORS for more flexibility. For example, you may want to specify different allowed origins for different routes or methods. Since App Service CORS lets you specify one set of accepted origins for all API routes and methods, you would want to use your own CORS code. See how ASP.NET Core does it at [Enabling Cross-Origin Requests (CORS)](/aspnet/core/security/cors).
+The built-in App Service CORS feature does not have options to allow only specific HTTP methods or verbs for each origin that you specify. It will automatically allow all methods and headers for each origin defined. This behavior is similar to [ASP.NET Core CORS](/aspnet/core/security/cors) policies when you use the options `.AllowAnyHeader()` and `.AllowAnyMethod()` in the code.
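+
+For illustration, the built-in CORS settings can also be managed from the Azure CLI; the resource group and app names below are placeholders:
+
+```bash
+# Placeholder names; add an allowed origin to the built-in App Service CORS settings
+az webapp cors add --resource-group myResourceGroup --name myApiApp \
+    --allowed-origins 'https://contoso.com'
+
+# Show the origins currently allowed for the app
+az webapp cors show --resource-group myResourceGroup --name myApiApp
+```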
+ > [!NOTE] > Don't try to use App Service CORS and your own CORS code together. When used together, App Service CORS takes precedence and your own CORS code has no effect. >
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 10/19/2022 Last updated : 10/20/2022 # Migrate to App Service Environment v3
Scenario: An existing app running on an App Service Environment v1 or App Servic
For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
Note that multiple App Service Environments can't exist in a single subnet. If y
App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment will need to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
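
As a sketch, creating an Isolated v2 plan in the new environment might look like the following; the resource names and SKU choice are placeholders:

```bash
# Placeholder names; create an Isolated v2 (I1v2) App Service plan
# inside the new App Service Environment v3
az appservice plan create \
    --resource-group myResourceGroup \
    --name my-asev3-plan \
    --app-service-environment my-asev3 \
    --sku I1v2
```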
+## Back up and restore
+
+The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
+
+> [!IMPORTANT]
+> You must configure custom backups for your apps in order to restore them to an App Service Environment v3. Automatic backup doesn't support restoration on different App Service Environment versions. For more information on custom backups, see [Automatic vs custom backups](../manage-backup.md#automatic-vs-custom-backups).
+>
+
+The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. You can select a custom backup and use that to restore the app to an App Service in your App Service Environment v3.
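+
+As a sketch under placeholder names, the custom backup and the restore onto the new environment can be driven from the Azure CLI; the SAS container URL is a value you'd supply from your own storage account:
+
+```bash
+# Placeholder names; create a custom backup of the app in the old environment
+az webapp config backup create \
+    --resource-group myResourceGroup \
+    --webapp-name my-asev2-app \
+    --backup-name pre-migration \
+    --container-url "<storage-container-SAS-URL>"
+
+# Restore that backup onto an app in the App Service Environment v3
+az webapp config backup restore \
+    --resource-group myResourceGroup \
+    --webapp-name my-asev3-app \
+    --backup-name pre-migration \
+    --container-url "<storage-container-SAS-URL>" \
+    --overwrite
+```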
++
+|Benefits |Limitations |
+|---|---|
+|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
+|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
+|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. |
+|Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
+|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
+|Can create empty web apps to restore to in your new environment before you start restoring to speed up the process | Only supports custom backups |
+ ## Clone your app to an App Service Environment v3 [Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#whats-included-in-an-automatic-backup).
Once your migration and any testing with your new environment is complete, delet
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). - **Is backup and restore supported for moving apps from App Service Environment v2 to v3?**
- The [back up and restore](../manage-backup.md) feature doesn't support restoring apps between App Service Environment versions (an app running on App Service Environment v2 can't be restored on an App Service Environment v3).
+ The [back up and restore](../manage-backup.md) feature supports restoring apps between App Service Environment versions as long as a custom backup is used for the restoration. Automatic backup doesn't support restoration to different App Service Environment versions.
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
Once your migration and any testing with your new environment is complete, delet
> [Migrate to App Service Environment v3 using the migration feature](migrate.md) > [!div class="nextstepaction"]
-> [Custom domain suffix](./how-to-custom-domain-suffix.md)
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Provision Resource Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-terraform.md
description: Create your first app to Azure App Service in seconds using a Terra
Previously updated : 8/5/2022 Last updated : 10/20/2022 ms.tool: terraform
resource "azurerm_service_plan" "appserviceplan" {
location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name os_type = "Linux"
- sku_name = "F1"
+ sku_name = "B1"
} # Create the web app, pass in the App Service Plan ID
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
To create a custom probe, follow [these steps](./application-gateway-create-prob
### HTTP response body mismatch **Message:** Body of the backend's HTTP response did not match the
-probe setting. Received response body does not contain {string}.
+probe setting. Received response body doesn't contain {string}.
**Cause:** When you create a custom probe, you can mark a backend server as Healthy by matching a string from the response body. For example, you can configure Application Gateway to accept "unauthorized" as a string to match. If the backend server response for the probe request contains the string **unauthorized**, it will be marked as Healthy. Otherwise, it will be marked as Unhealthy with this message.
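
As an illustration, a custom probe with a body-match string can be created from the Azure CLI; the gateway, resource group, and probe values below are placeholders:

```bash
# Placeholder names; create a custom probe that marks the backend Healthy
# only when the probe response body contains the string "unauthorized"
az network application-gateway probe create \
    --resource-group myResourceGroup \
    --gateway-name myAppGateway \
    --name customProbe \
    --protocol Https \
    --host contoso.com \
    --path /path/custompath.htm \
    --match-body "unauthorized"
```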
For more information about how to extract and upload Trusted Root Certificates i
### Trusted root certificate mismatch
-**Message:** The root certificate of the server certificate used by the backend does not match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to whitelist the backend.
+**Message:** The root certificate of the server certificate used by the backend doesn't match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to whitelist the backend.
**Cause:** End-to-end SSL with Application Gateway v2 requires the backend server's certificate to be verified in order to deem the server Healthy. For a TLS/SSL certificate to be trusted, the backend server certificate must be issued by a CA that's included in the trusted store of Application Gateway. If the certificate wasn't issued by a trusted CA (for example, a self-signed certificate was used), users should upload the issuer's certificate to Application Gateway.
If the output doesn't show the complete chain of the certificate being returned,
### Backend certificate invalid common name (CN)
-**Message:** The Common Name (CN) of the backend certificate does not match the host header of the probe.
+**Message:** The Common Name (CN) of the backend certificate doesn't match the host header of the probe.
**Cause:** Application Gateway checks whether the host name specified in the backend HTTP settings matches that of the CN presented by the backend server's TLS/SSL certificate. This verification is Standard_v2 and WAF_v2 SKU (V2) behavior. The Standard and WAF SKU (v1) Server Name Indication (SNI) is set as the FQDN in the backend pool address. For more information on SNI behavior and differences between v1 and v2 SKU, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md).
This behavior can occur for one or more of the following reasons:
3. Default route advertised by the ExpressRoute/VPN connection to the virtual network over BGP:
- a. If you have an ExpressRoute/VPN connection to the virtual network over BGP, and if you are advertising a default route, you must make sure that the packet is routed back to the internet destination without modifying it. You can verify by using the **Connection Troubleshoot** option in the Application Gateway portal.
+ a. If you have an ExpressRoute/VPN connection to the virtual network over BGP, and if you're advertising a default route, you must make sure that the packet is routed back to the internet destination without modifying it. You can verify by using the **Connection Troubleshoot** option in the Application Gateway portal.
b. Choose the destination manually as any internet-routable IP address like 1.1.1.1. Set the destination port as anything, and verify the connectivity. c. If the next hop is virtual network gateway, there might be a default route advertised over ExpressRoute or VPN.
application-gateway Application Gateway Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-components.md
Title: Application gateway components description: This article provides information about the various components in an application gateway -+ Last updated 08/21/2020-+ # Application gateway components
A frontend IP address is the IP address associated with an application gateway.
The Azure Application Gateway V2 SKU can be configured to support both a static internal IP address and a static public IP address, or only a static public IP address. It can't be configured to support only a static internal IP address.
-The V1 SKU can be configured to support static or dynamic internal IP address and dynamic public IP address. The dynamic IP address of Application Gateway does not change on a running gateway. It can change only when you stop or start the Gateway. It does not change on system failures, updates, Azure host updates etc.
+The V1 SKU can be configured to support static or dynamic internal IP address and dynamic public IP address. The dynamic IP address of Application Gateway doesn't change on a running gateway. It can change only when you stop or start the Gateway. It doesn't change on system failures, updates, Azure host updates etc.
The DNS name associated with an application gateway doesn't change over the lifecycle of the gateway. As a result, you should use a CNAME alias and point it to the DNS address of the application gateway.
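
For illustration, assuming a gateway DNS name of `myappgw.eastus.cloudapp.azure.com` and a custom domain `www.contoso.com` (both placeholders), you can verify the CNAME mapping from a shell:

```bash
# Resolve the gateway's DNS name to its current public IP address
dig +short myappgw.eastus.cloudapp.azure.com

# Confirm the custom domain is a CNAME alias for the gateway's DNS name
dig +short CNAME www.contoso.com
```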
After you create a listener, you associate it with a request routing rule. This
## Request routing rules
-A request routing rule is a key component of an application gateway because it determines how to route traffic on the listener. The rule binds the listener, the back-end server pool, and the backend HTTP settings.
+A request routing rule is a key component of an application gateway because it determines how to route traffic on the listener. The rule binds the listener, the backend server pool, and the backend HTTP settings.
When a listener accepts a request, the request routing rule forwards the request to the backend or redirects it elsewhere. If the request is forwarded to the backend, the request routing rule defines which backend server pool to forward it to. The request routing rule also determines if the headers in the request are to be rewritten. One listener can be attached to one rule.
You can create different backend pools for different types of requests. For exam
By default, an application gateway monitors the health of all resources in its backend pool and automatically removes unhealthy ones. It then monitors unhealthy instances and adds them back to the healthy backend pool when they become available and respond to health probes.
-In addition to using default health probe monitoring, you can also customize the health probe to suit your application's requirements. Custom probes allow more granular control over the health monitoring. When using custom probes, you can configure a custom hostname, URL path, probe interval, and how many failed responses to accept before marking the back-end pool instance as unhealthy, custom status codes and response body match, etc. We recommend that you configure custom probes to monitor the health of each backend pool.
+In addition to using default health probe monitoring, you can also customize the health probe to suit your application's requirements. Custom probes allow more granular control over the health monitoring. When using custom probes, you can configure a custom hostname, URL path, probe interval, and how many failed responses to accept before marking the backend pool instance as unhealthy, custom status codes and response body match, etc. We recommend that you configure custom probes to monitor the health of each backend pool.
For more information, see [Monitor the health of your application gateway](../application-gateway/application-gateway-probe-overview.md).
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
Title: Configure listener-specific SSL policies on Azure Application Gateway through portal description: Learn how to configure listener-specific SSL policies on Application Gateway through portal -+ Last updated 02/18/2022-+ # Configure listener-specific SSL policies on Application Gateway through portal
Before you proceed, here are some important points related to listener-specific
- You don't have to configure client authentication on an SSL profile to associate it to a listener. You can have only client authentication or listener-specific SSL policy configured, or both configured in your SSL profile. - Using a new Predefined or Customv2 policy enhances SSL security and performance for the entire gateway (SSL Policy and SSL Profile). Therefore, you cannot have different listeners on both old as well as new SSL (predefined or custom) policies.
- Consider this example, you are currently using SSL Policy and SSL Profile with &#34;older&#34; policies/ciphers. To use a &#34;new&#34; Predefined or Customv2 policy for any one of them will also require you to upgrade the other configuration. You may use the new predefined policies, or customv2 policy, or combination of these across the gateway.
+ Consider this example, you're currently using SSL Policy and SSL Profile with &#34;older&#34; policies/ciphers. To use a &#34;new&#34; Predefined or Customv2 policy for any one of them will also require you to upgrade the other configuration. You may use the new predefined policies, or customv2 policy, or combination of these across the gateway.
To set up a listener-specific SSL policy, you'll need to first go to the **SSL settings** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is to configure a listener-specific SSL policy. The **Client Authentication** tab is where to upload a client certificate(s) for mutual authentication - for more information, check out [Configuring a mutual authentication](./mutual-authentication-portal.md).
Now that we've created an SSL profile with a listener-specific SSL policy, we ne
![Associate SSL profile to new listener](./media/mutual-authentication-portal/mutual-authentication-listener-portal.png)

### Limitations
-There is a limitation right now on Application Gateway where different listeners using the same port cannot have SSL policies (predefined or custom) with different TLS protocol versions. Choosing the same TLS version for different listeners will work for configuring cipher suite preference for each listener. However, to use different TLS protocol versions for separate listeners, you will need to use distinct ports for each.
+There's currently a limitation on Application Gateway: different listeners using the same port can't have SSL policies (predefined or custom) with different TLS protocol versions. Choosing the same TLS version for different listeners will work for configuring cipher suite preference for each listener. However, to use different TLS protocol versions for separate listeners, you will need to use distinct ports for each.
## Next steps
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
Set-AzApplicationGateway -ApplicationGateway $gw
``` > [!IMPORTANT]
-> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU. This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
-> - Cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 are not customizable and included by default when setting a CustomV2 policy with a minimum TLS version of 1.2 or 1.3. These two cipher suites will not appear in the Get Details output, with an exception of Portal.
+> - If you're using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher &#34;TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256&#34; to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU. This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
+> - Cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 are not customizable and included by default when setting a CustomV2 policy with a minimum TLS version of 1.2 or 1.3. These two cipher suites won't appear in the Get Details output, with an exception of Portal.
To set minimum protocol version to 1.3, you must use the following command:
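A minimal sketch of that command, assuming an Az.Network version that supports the CustomV2 policy type and the TLSv1_3 value:

```powershell
# CustomV2 policy with TLS 1.3 as the minimum protocol version; persist with Set-AzApplicationGateway.
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType CustomV2 -MinProtocolVersion TLSv1_3
Set-AzApplicationGateway -ApplicationGateway $gw
```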
application-gateway Application Gateway Create Probe Classic Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-classic-ps.md
To create an application gateway:
### Create an application gateway resource with a custom probe
-To create the gateway, use the `New-AzureApplicationGateway` cmdlet, replacing the values with your own. Billing for the gateway does not start at this point. Billing begins in a later step, when the gateway is successfully started.
+To create the gateway, use the `New-AzureApplicationGateway` cmdlet, replacing the values with your own. Billing for the gateway doesn't start at this point. Billing begins in a later step, when the gateway is successfully started.
The following example creates an application gateway by using a virtual network called "testvnet1" and a subnet called "subnet-1".
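A sketch of that creation step with the classic (Service Management) module; the gateway name is a placeholder:

```powershell
# Classic deployment model: create the gateway in testvnet1 / subnet-1. Billing hasn't started yet.
New-AzureApplicationGateway -Name 'AppGwTest' -VnetName 'testvnet1' -Subnets @('subnet-1')
```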
Copy the following text to Notepad.
Edit the values between the parentheses for the configuration items. Save the file with extension .xml.
-The following example shows how to use a configuration file to set up the application gateway to load balance HTTP traffic on public port 80 and send network traffic to back-end port 80 between two IP addresses by using a custom probe.
+The following example shows how to use a configuration file to set up the application gateway to load balance HTTP traffic on public port 80 and send network traffic to backend port 80 between two IP addresses by using a custom probe.
> [!IMPORTANT]
> The protocol item Http or Https is case-sensitive.
The configuration parameters are:
| **Host** and **Path** | Complete URL path that is invoked by the application gateway to determine the health of the instance. For example, if you have a website http:\//contoso.com/, then the custom probe can be configured for "http:\//contoso.com/path/custompath.htm" for probe checks to have a successful HTTP response.|
| **Interval** | Configures the probe interval checks in seconds.|
| **Timeout** | Defines the probe time-out for an HTTP response check.|
-| **UnhealthyThreshold** | The number of failed HTTP responses needed to flag the back-end instance as *unhealthy*.|
+| **UnhealthyThreshold** | The number of failed HTTP responses needed to flag the backend instance as *unhealthy*.|
-The probe name is referenced in the \<BackendHttpSettings\> configuration to assign which back-end pool uses custom probe settings.
+The probe name is referenced in the \<BackendHttpSettings\> configuration to assign which backend pool uses custom probe settings.
## Add a custom probe to an existing application gateway
application-gateway Application Gateway Create Probe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-portal.md
> * [Azure Resource Manager PowerShell](application-gateway-create-probe-ps.md)
> * [Azure Classic PowerShell](application-gateway-create-probe-classic-ps.md)
-In this article, you add a custom health probe to an existing application gateway through the Azure portal. Azure Application Gateway uses these health probes to monitor the health of the resources in the back-end pool.
+In this article, you add a custom health probe to an existing application gateway through the Azure portal. Azure Application Gateway uses these health probes to monitor the health of the resources in the backend pool.
## Before you begin
Probes are configured in a two-step process through the portal. The first step i
|**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.|
|**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. |
|**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to \<protocol\>://\<host name\>:\<port\>/\<urlPath\>. This can also be the private IP address of the server, the public IP address, or the DNS entry of the public IP address. When used with a file-based path entry, the probe attempts to access the server and validates that a specific file exists on the server as a health check.|
- |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name from the HTTP settings to which this probe is associated. Specially required for multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
+ |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name from the HTTP settings to which this probe is associated. Especially required for multi-tenant backends such as Azure App Service. [Learn more](./configuration-http-settings.md#pick-host-name-from-backend-address)|
|**Pick port from backend HTTP settings**| Yes or No|Sets the *port* of the health probe to the port from the HTTP settings to which this probe is associated. If you choose No, you can enter a custom destination port to use. |
|**Port**| 1-65535 | Custom port to be used for the health probes. |
|**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also input a server path to a file for a static health check instead of web based. File paths should be used while using a public/private IP, or a public IP DNS entry, as the hostname entry.|
Probes are configured in a two-step process through the portal. The first step i
|**HTTP Settings**|selection from dropdown|The probe is associated with the HTTP settings selected here and therefore monitors the health of the backend pool that is associated with the selected HTTP setting. It uses the same port for the probe request as the one used in the selected HTTP setting. You can only choose HTTP settings that aren't associated with any other custom probe. <br>The only HTTP settings available for association are those that have the same protocol as the protocol chosen in this probe configuration, and the same state for the *Pick Host Name From Backend HTTP setting* switch.|

> [!IMPORTANT]
- > The probe will monitor health of the backend only when it's associated with one or more HTTP settings. It will monitor back-end resources of those back-end pools which are associated to the HTTP settings to which this probe is associated with. The probe request will be sent as \<protocol\>://\<hostName\>:\<port\>/\<urlPath\>.
+ > The probe monitors the health of the backend only when it's associated with one or more HTTP settings. It monitors backend resources of the backend pools that are associated with the HTTP settings to which this probe is attached. The probe request is sent as \<protocol\>://\<hostName\>:\<port\>/\<urlPath\>.
### Test backend health with the probe
-After entering the probe properties, you can test the health of the back-end resources to verify that the probe configuration is correct and that the back-end resources are working as expected.
+After entering the probe properties, you can test the health of the backend resources to verify that the probe configuration is correct and that the backend resources are working as expected.
1. Select **Test** and note the result of the probe. The application gateway tests the health of all the backend resources in the backend pools associated with the HTTP settings used for this probe.
After entering the probe properties, you can test the health of the back-end res
2. If there are any unhealthy backend resources, check the **Details** column to understand the reason for the unhealthy state of the resource. If the resource has been marked unhealthy due to an incorrect probe configuration, select the **Go back to probe** link and edit the probe configuration. Otherwise, if the resource has been marked unhealthy due to an issue with the backend, resolve the issues with the backend resource, then test the backend again by selecting the **Go back to probe** link and selecting **Test**.

> [!NOTE]
- > You can choose to save the probe even with unhealthy backend resources, but it isn't recommended. This is because the Application Gateway will not forward requests to the backend servers from the backend pool, which are determined to be unhealthy by the probe. In case there are no healthy resources in a backend pool, you will not be able to access your application and will get a HTTP 502 error.
+ > You can choose to save the probe even with unhealthy backend resources, but it isn't recommended. This is because the Application Gateway won't forward requests to backend servers that the probe has determined to be unhealthy. If there are no healthy resources in a backend pool, you won't be able to access your application and will get an HTTP 502 error.
![View probe result][6]
Probes are configured in a two-step process through the portal. The first step i
|**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.|
|**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. |
|**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to (protocol)://(host name):(port from httpsetting)/urlPath. This is applicable when multi-site is configured on Application Gateway. If the Application Gateway is configured for a single site, then enter '127.0.0.1'. You can also input a server path to a file for a static health check instead of web based. File paths should be used while using a public/private IP, or a public IP DNS entry, as the hostname entry.|
- |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name of the back-end resource in the back-end pool associated with the HTTP Setting to which this probe is associated. Specially required for multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
+ |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name of the backend resource in the backend pool associated with the HTTP Setting to which this probe is associated. Especially required for multi-tenant backends such as Azure App Service. [Learn more](./configuration-http-settings.md#pick-host-name-from-backend-address)|
|**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also input a server path to a file for a static health check instead of web based. File paths should be used while using a public/private IP, or a public IP DNS entry, as the hostname entry.|
|**Interval (secs)**|30|How often the probe is run to check for health. Setting this lower than 30 seconds isn't recommended.|
|**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response isn't received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an HTTP call can be made to ensure the backend health page is available. The time-out value shouldn't be more than the 'Interval' value used in this probe setting or the 'Request timeout' value in the HTTP setting, which will be associated with this probe.|
Now that the probe has been created, it's time to add it to the gateway. Probe s
## Next steps
-View the health of the backend resources as determined by the probe using the [backend health view](./application-gateway-diagnostics.md#back-end-health).
+View the health of the backend resources as determined by the probe using the [backend health view](./application-gateway-diagnostics.md#backend-health).
[1]: ./media/application-gateway-create-probe-portal/figure1.png
[2]: ./media/application-gateway-create-probe-portal/figure2.png
application-gateway Application Gateway Create Probe Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-ps.md
$vnet = New-AzVirtualNetwork -Name appgwvnet -ResourceGroupName appgw-rg -Locati
$subnet = $vnet.Subnets[0] ```
-### Create a public IP address for the front-end configuration
+### Create a public IP address for the frontend configuration
-Create a public IP resource **publicIP01** in resource group **appgw-rg** for the West US region. This example uses a public IP address for the front-end IP address of the application gateway. Application gateway requires the public IP address to have a dynamically created DNS name therefore the `-DomainNameLabel` cannot be specified during the creation of the public IP address.
+Create a public IP resource **publicIP01** in resource group **appgw-rg** for the West US region. This example uses a public IP address for the frontend IP address of the application gateway. Application gateway requires the public IP address to have a dynamically created DNS name; therefore, the `-DomainNameLabel` can't be specified during the creation of the public IP address.
```powershell $publicip = New-AzPublicIpAddress -ResourceGroupName appgw-rg -Name publicIP01 -Location 'West US' -AllocationMethod Dynamic
You set up all configuration items before creating the application gateway. The
# Creates an application gateway Frontend IP configuration named gatewayIP01
$gipconfig = New-AzApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet
-#Creates a back-end IP address pool named pool01 with IP addresses 134.170.185.46, 134.170.188.221, 134.170.185.50.
+#Creates a backend IP address pool named pool01 with IP addresses 134.170.185.46, 134.170.188.221, 134.170.185.50.
$pool = New-AzApplicationGatewayBackendAddressPool -Name pool01 -BackendIPAddresses 134.170.185.46, 134.170.188.221, 134.170.185.50
# Creates a probe that will check health at http://contoso.com/path/path.htm
$poolSetting = New-AzApplicationGatewayBackendHttpSettings -Name poolsetting01 -
# Creates a frontend port for the application gateway to listen on port 80 that will be used by the listener.
$fp = New-AzApplicationGatewayFrontendPort -Name frontendport01 -Port 80
-# Creates a frontend IP configuration. This associates the $publicip variable defined previously with the front-end IP that will be used by the listener.
+# Creates a frontend IP configuration. This associates the $publicip variable defined previously with the frontend IP that will be used by the listener.
$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name fipconfig01 -PublicIPAddress $publicip
# Creates the listener. The listener is a combination of protocol and the frontend IP configuration $fipconfig and frontend port $fp created in previous steps.
Set-AzApplicationGateway -ApplicationGateway $getgw
## Get application gateway DNS name
-Once the gateway is created, the next step is to configure the front end for communication. When using a public IP, application gateway requires a dynamically assigned DNS name, which is not friendly. To ensure end users can hit the application gateway a CNAME record can be used to point to the public endpoint of the application gateway. [Configuring a custom domain name for in Azure](../cloud-services/cloud-services-custom-domain-name-portal.md). To do this, retrieve details of the application gateway and its associated IP/DNS name using the PublicIPAddress element attached to the application gateway. The application gateway's DNS name should be used to create a CNAME record, which points the two web applications to this DNS name. The use of A-records is not recommended since the VIP may change on restart of application gateway.
+Once the gateway is created, the next step is to configure the front end for communication. When you're using a public IP address, application gateway requires a dynamically assigned DNS name, which isn't friendly. To ensure end users can reach the application gateway, you can use a CNAME record that points to the public endpoint of the application gateway. For more information, see [Configuring a custom domain name in Azure](../cloud-services/cloud-services-custom-domain-name-portal.md). To do this, retrieve details of the application gateway and its associated IP/DNS name by using the PublicIPAddress element attached to the application gateway. Use the application gateway's DNS name to create a CNAME record that points the two web applications to this DNS name. The use of A records isn't recommended because the VIP may change when the application gateway restarts.
```powershell Get-AzPublicIpAddress -ResourceGroupName appgw-RG -Name publicIP01
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Title: Back-end health and diagnostic logs
+ Title: Backend health and diagnostic logs
description: Learn how to enable and manage access logs and performance logs for Azure Application Gateway
-# Back-end health and diagnostic logs for Application Gateway
+# Backend health and diagnostic logs for Application Gateway
You can monitor Azure Application Gateway resources in the following ways:
-* [Back-end health](#back-end-health): Application Gateway provides the capability to monitor the health of the servers in the back-end pools through the Azure portal and through PowerShell. You can also find the health of the back-end pools through the performance diagnostic logs.
+* [Backend health](#backend-health): Application Gateway provides the capability to monitor the health of the servers in the backend pools through the Azure portal and through PowerShell. You can also find the health of the backend pools through the performance diagnostic logs.
* [Logs](#diagnostic-logging): Logs allow for performance, access, and other data to be saved or consumed from a resource for monitoring purposes.
You can monitor Azure Application Gateway resources in the following ways:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-## Back-end health
+## Backend health
-Application Gateway provides the capability to monitor the health of individual members of the back-end pools through the portal, PowerShell, and the command-line interface (CLI). You can also find an aggregated health summary of back-end pools through the performance diagnostic logs.
+Application Gateway provides the capability to monitor the health of individual members of the backend pools through the portal, PowerShell, and the command-line interface (CLI). You can also find an aggregated health summary of backend pools through the performance diagnostic logs.
-The back-end health report reflects the output of the Application Gateway health probe to the back-end instances. When probing is successful and the back end can receive traffic, it's considered healthy. Otherwise, it's considered unhealthy.
+The backend health report reflects the output of the Application Gateway health probe to the backend instances. When probing is successful and the back end can receive traffic, it's considered healthy. Otherwise, it's considered unhealthy.
> [!IMPORTANT]
-> If there is a network security group (NSG) on an Application Gateway subnet, open port ranges 65503-65534 for v1 SKUs, and 65200-65535 for v2 SKUs on the Application Gateway subnet for inbound traffic. This port range is required for Azure infrastructure communication. They are protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, will not be able to initiate any changes on those endpoints.
+> If there is a network security group (NSG) on an Application Gateway subnet, open port ranges 65503-65534 for v1 SKUs, and 65200-65535 for v2 SKUs, on the Application Gateway subnet for inbound traffic. This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, won't be able to initiate any changes on those endpoints.
-### View back-end health through the portal
+### View backend health through the portal
-In the portal, back-end health is provided automatically. In an existing application gateway, select **Monitoring** > **Backend health**.
+In the portal, backend health is provided automatically. In an existing application gateway, select **Monitoring** > **Backend health**.
-Each member in the back-end pool is listed on this page (whether it's a NIC, IP, or FQDN). Back-end pool name, port, back-end HTTP settings name, and health status are shown. Valid values for health status are **Healthy**, **Unhealthy**, and **Unknown**.
+Each member in the backend pool is listed on this page (whether it's a NIC, IP, or FQDN). Backend pool name, port, backend HTTP settings name, and health status are shown. Valid values for health status are **Healthy**, **Unhealthy**, and **Unknown**.
> [!NOTE]
-> If you see a back-end health status of **Unknown**, ensure that access to the back end is not blocked by an NSG rule, a user-defined route (UDR), or a custom DNS in the virtual network.
+> If you see a backend health status of **Unknown**, ensure that access to the back end is not blocked by an NSG rule, a user-defined route (UDR), or a custom DNS in the virtual network.
-![Back-end health][10]
+![Backend health][10]
-### View back-end health through PowerShell
+### View backend health through PowerShell
-The following PowerShell code shows how to view back-end health by using the `Get-AzApplicationGatewayBackendHealth` cmdlet:
+The following PowerShell code shows how to view backend health by using the `Get-AzApplicationGatewayBackendHealth` cmdlet:
```powershell
Get-AzApplicationGatewayBackendHealth -Name ApplicationGateway1 -ResourceGroupName Contoso
```
-### View back-end health through Azure CLI
+### View backend health through Azure CLI
```azurecli
az network application-gateway show-backend-health --resource-group AdatumAppGatewayRG --name AdatumAppGateway
```
You can use different types of logs in Azure to manage and troubleshoot applicat
* **Activity log**: You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default, and you can view them in the Azure portal.
* **Access log**: You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.
-* **Performance log**: You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy back-end instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data.
+* **Performance log**: You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data.
* **Firewall log**: You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.

> [!NOTE]
The access log is generated only if you've enabled it on each Application Gatewa
|clientPort | Originating port for the request. |
|httpMethod | HTTP method used by the request. |
|requestUri | URI of the received request. |
-|RequestQuery | **Server-Routed**: Back-end pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the back-end servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
+|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
|UserAgent | User agent from the HTTP request header. |
|httpStatus | HTTP status code returned to the client from Application Gateway. |
|httpVersion | HTTP version of the request. |
|receivedBytes | Size of packet received, in bytes. |
|sentBytes| Size of packet sent, in bytes.|
|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
-|sslEnabled| Whether communication to the back-end pools used TLS/SSL. Valid values are on and off.|
+|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.|
|originalHost| The hostname with which the request was received by the Application Gateway from the client.|
The access log is generated only if you've enabled it on each Application Gatewa
|WAFEvaluationTime| Length of time (in **seconds**) that it takes for the request to be processed by the WAF. |
|WAFMode| Value can be either Detection or Prevention. |
|transactionId| Unique identifier to correlate the request received from the client. |
-|sslEnabled| Whether communication to the back-end pools used TLS. Valid values are on and off.|
+|sslEnabled| Whether communication to the backend pools used TLS. Valid values are on and off.|
|sslCipher| Cipher suite being used for TLS communication (if TLS is enabled).|
|sslProtocol| SSL/TLS protocol being used (if TLS is enabled).|
|serverRouted| The backend server that application gateway routes the request to.|
The performance log is generated only if you have enabled it on each Application
|Value |Description |
|||
|instanceId | Application Gateway instance for which performance data is being generated. For a multiple-instance application gateway, there is one row per instance. |
-|healthyHostCount | Number of healthy hosts in the back-end pool. |
-|unHealthyHostCount | Number of unhealthy hosts in the back-end pool. |
+|healthyHostCount | Number of healthy hosts in the backend pool. |
+|unHealthyHostCount | Number of unhealthy hosts in the backend pool. |
|requestCount | Number of requests served. |
|latency | Average latency (in milliseconds) of requests from the instance to the back end that serves the requests. |
|failedRequestCount| Number of failed requests.|
You can view and analyze activity log data by using any of the following methods
You can also connect to your storage account and retrieve the JSON log entries for access and performance logs. After you download the JSON files, you can convert them to CSV and view them in Excel, Power BI, or any other data-visualization tool.

> [!TIP]
-> If you are familiar with Visual Studio and basic concepts of changing values for constants and variables in C#, you can use the [log converter tools](https://github.com/Azure-Samples/networking-dotnet-log-converter) available from GitHub.
+> If you're familiar with Visual Studio and basic concepts of changing values for constants and variables in C#, you can use the [log converter tools](https://github.com/Azure-Samples/networking-dotnet-log-converter) available from GitHub.
> >
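If you'd rather script the conversion than use the converter tools, a small sketch (it assumes the downloaded blob wraps entries in a top-level `records` array; adjust if your logs are line-delimited JSON):

```powershell
# Flatten a downloaded diagnostics blob (for example, PT1H.json) into CSV for Excel or Power BI.
$records = (Get-Content .\PT1H.json -Raw | ConvertFrom-Json).records
$records | ForEach-Object { $_.properties } | Export-Csv .\access-log.csv -NoTypeInformation
```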
application-gateway Application Gateway End To End Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-end-to-end-ssl-powershell.md
## Overview
-Azure Application Gateway supports end-to-end encryption of traffic. Application Gateway terminates the TLS/SSL connection at the application gateway. The gateway then applies the routing rules to the traffic, re-encrypts the packet, and forwards the packet to the appropriate back-end server based on the routing rules defined. Any response from the web server goes through the same process back to the end user.
+Azure Application Gateway supports end-to-end encryption of traffic. Application Gateway terminates the TLS/SSL connection at the application gateway. The gateway then applies the routing rules to the traffic, re-encrypts the packet, and forwards the packet to the appropriate backend server based on the routing rules defined. Any response from the web server goes through the same process back to the end user.
Application Gateway supports defining custom TLS options. It also supports disabling the following protocol versions: **TLSv1.0**, **TLSv1.1**, and **TLSv1.2**, as well as defining which cipher suites to use and the order of preference. To learn more about configurable TLS options, see the [TLS policy overview](application-gateway-SSL-policy-overview.md).
This scenario will:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-To configure end-to-end TLS with an application gateway, a certificate is required for the gateway and certificates are required for the back-end servers. The gateway certificate is used to derive a symmetric key as per TLS protocol specification. The symmetric key is then used encrypt and decrypt the traffic sent to the gateway. The gateway certificate needs to be in Personal Information Exchange (PFX) format. This file format allows you to export the private key that is required by the application gateway to perform the encryption and decryption of traffic.
+To configure end-to-end TLS with an application gateway, a certificate is required for the gateway and certificates are required for the backend servers. The gateway certificate is used to derive a symmetric key as per the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway. The gateway certificate needs to be in Personal Information Exchange (PFX) format. This file format allows you to export the private key that is required by the application gateway to perform the encryption and decryption of traffic.
-For end-to-end TLS encryption, the back end must be explicitly allowed by the application gateway. Upload the public certificate of the back-end servers to the application gateway. Adding the certificate ensures that the application gateway only communicates with known back-end instances. This further secures the end-to-end communication.
+For end-to-end TLS encryption, the back end must be explicitly allowed by the application gateway. Upload the public certificate of the backend servers to the application gateway. Adding the certificate ensures that the application gateway only communicates with known backend instances. This further secures the end-to-end communication.
The configuration process is described in the following sections.
The following example creates a virtual network and two subnets. One subnet is u
> Subnets configured for an application gateway should be properly sized. An application gateway can be configured for up to 10 instances. Each instance takes one IP address from the subnet. Too small of a subnet can adversely affect scaling out an application gateway. >
-2. Assign an address range to be used for the back-end address pool.
+2. Assign an address range to be used for the backend address pool.
```powershell $nicSubnet = New-AzVirtualNetworkSubnetConfig -Name 'appsubnet' -AddressPrefix 10.0.2.0/24
The following example creates a virtual network and two subnets. One subnet is u
$nicSubnet = Get-AzVirtualNetworkSubnetConfig -Name 'appsubnet' -VirtualNetwork $vnet ```
-## Create a public IP address for the front-end configuration
+## Create a public IP address for the frontend configuration
Create a public IP resource to be used for the application gateway. This public IP address is used in one of the steps that follow.
$publicip = New-AzPublicIpAddress -ResourceGroupName appgw-rg -Name 'publicIP01'
```

> [!IMPORTANT]
-> Application Gateway does not support the use of a public IP address created with a defined domain label. Only a public IP address with a dynamically created domain label is supported. If you require a friendly DNS name for the application gateway, we recommend you use a CNAME record as an alias.
+> Application Gateway doesn't support the use of a public IP address created with a defined domain label. Only a public IP address with a dynamically created domain label is supported. If you require a friendly DNS name for the application gateway, we recommend you use a CNAME record as an alias.
## Create an application gateway configuration object All configuration items are set before creating the application gateway. The following steps create the configuration items that are needed for an application gateway resource.
-1. Create an application gateway IP configuration. This setting configures which of the subnets the application gateway uses. When application gateway starts, it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the back-end IP pool. Keep in mind that each instance takes one IP address.
+1. Create an application gateway IP configuration. This setting configures which of the subnets the application gateway uses. When application gateway starts, it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the backend IP pool. Keep in mind that each instance takes one IP address.
```powershell $gipconfig = New-AzApplicationGatewayIPConfiguration -Name 'gwconfig' -Subnet $gwSubnet ```
-2. Create a front-end IP configuration. This setting maps a private or public IP address to the front end of the application gateway. The following step associates the public IP address in the preceding step with the front-end IP configuration.
+2. Create a frontend IP configuration. This setting maps a private or public IP address to the front end of the application gateway. The following step associates the public IP address in the preceding step with the frontend IP configuration.
```powershell $fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name 'fip01' -PublicIPAddress $publicip ```
-3. Configure the back-end IP address pool with the IP addresses of the back-end web servers. These IP addresses are the IP addresses that receive the network traffic that comes from the front-end IP endpoint. Replace the IP addresses in the sample with your own application IP address endpoints.
+3. Configure the backend IP address pool with the IP addresses of the backend web servers. These IP addresses are the IP addresses that receive the network traffic that comes from the frontend IP endpoint. Replace the IP addresses in the sample with your own application IP address endpoints.
```powershell
$pool = New-AzApplicationGatewayBackendAddressPool -Name 'pool01' -BackendIPAddresses 1.1.1.1, 2.2.2.2, 3.3.3.3
```

> [!NOTE]
- > A fully qualified domain name (FQDN) is also a valid value to use in place of an IP address for the back-end servers. You enable it by using the **-BackendFqdns** switch.
+ > A fully qualified domain name (FQDN) is also a valid value to use in place of an IP address for the backend servers. You enable it by using the **-BackendFqdns** switch.
-4. Configure the front-end IP port for the public IP endpoint. This port is the port that end users connect to.
+4. Configure the frontend IP port for the public IP endpoint. This port is the port that end users connect to.
```powershell $fp = New-AzApplicationGatewayFrontendPort -Name 'port01' -Port 443
All configuration items are set before creating the application gateway. The fol
> [!NOTE]
> This sample configures the certificate used for the TLS connection. The certificate needs to be in .pfx format.
-6. Create the HTTP listener for the application gateway. Assign the front-end IP configuration, port, and TLS/SSL certificate to use.
+6. Create the HTTP listener for the application gateway. Assign the frontend IP configuration, port, and TLS/SSL certificate to use.
```powershell $listener = New-AzApplicationGatewayHttpListener -Name listener01 -Protocol Https -FrontendIPConfiguration $fipconfig -FrontendPort $fp -SSLCertificate $cert ```
-7. Upload the certificate to be used on the TLS-enabled back-end pool resources.
+7. Upload the certificate to be used on the TLS-enabled backend pool resources.
> [!NOTE]
- > The default probe gets the public key from the *default* TLS binding on the back-end's IP address and compares the public key value it receives to the public key value you provide here.
+ > The default probe gets the public key from the *default* TLS binding on the backend's IP address and compares the public key value it receives to the public key value you provide here.
>
- > If you are using host headers and Server Name Indication (SNI) on the back end, the retrieved public key might not be the intended site to which traffic flows. If you're in doubt, visit https://127.0.0.1/ on the back-end servers to confirm which certificate is used for the *default* TLS binding. Use the public key from that request in this section. If you are using host-headers and SNI on HTTPS bindings and you do not receive a response and certificate from a manual browser request to https://127.0.0.1/ on the back-end servers, you must set up a default TLS binding on the them. If you do not do so, probes fail and the back end is not allowed.
+ > If you're using host headers and Server Name Indication (SNI) on the back end, the retrieved public key might not be the intended site to which traffic flows. If you're in doubt, visit https://127.0.0.1/ on the backend servers to confirm which certificate is used for the *default* TLS binding. Use the public key from that request in this section. If you're using host headers and SNI on HTTPS bindings and you don't receive a response and certificate from a manual browser request to https://127.0.0.1/ on the backend servers, you must set up a default TLS binding on them. If you don't do so, probes fail and the back end isn't allowed.
For more information about SNI in Application Gateway, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md).
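For instance, a minimal sketch of the upload for the v1 SKU (the certificate name and path are placeholders):

```powershell
# Upload the backend server's public certificate (.cer) so the gateway allows that back end.
$authcert = New-AzApplicationGatewayAuthenticationCertificate -Name 'allowlistcert1' -CertificateFile 'C:\cert.cer'
```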
All configuration items are set before creating the application gateway. The fol
```

> [!NOTE]
- > The certificate provided in the previous step should be the public key of the .pfx certificate present on the back end. Export the certificate (not the root certificate) installed on the back-end server in Claim, Evidence, and Reasoning (CER) format and use it in this step. This step allows the back end with the application gateway.
+ > The certificate provided in the previous step should be the public key of the .pfx certificate present on the back end. Export the certificate (not the root certificate) installed on the backend server in .cer format and use it in this step. This step allowlists the back end with the application gateway.
- If you are using the Application Gateway v2 SKU, then create a trusted root certificate instead of an authentication certificate. For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku):
+ If you're using the Application Gateway v2 SKU, then create a trusted root certificate instead of an authentication certificate. For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku):
```powershell $trustedRootCert01 = New-AzApplicationGatewayTrustedRootCertificate -Name "test1" -CertificateFile <path to root cert file>
For V2 SKU use the below command
$appgw = New-AzApplicationGateway -Name appgateway -SSLCertificates $cert -ResourceGroupName "appgw-rg" -Location "West US" -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting01 -FrontendIpConfigurations $fipconfig -GatewayIpConfigurations $gipconfig -FrontendPorts $fp -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku -SSLPolicy $SSLPolicy -TrustedRootCertificate $trustedRootCert01 -Verbose ```
-## Apply a new certificate if the back-end certificate is expired
+## Apply a new certificate if the backend certificate is expired
-Use this procedure to apply a new certificate if the back-end certificate is expired.
+Use this procedure to apply a new certificate if the backend certificate is expired.
1. Retrieve the application gateway to update.
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md
This article walks you through the steps to configure a Standard v1 Application
## What is required to create an application gateway?
-* **Back-end server pool:** The list of IP addresses of the back-end servers. The IP addresses listed should either belong to the virtual network but in a different subnet for the application gateway or should be a public IP/VIP.
-* **Back-end server pool settings:** Every pool has settings like port, protocol, and cookie-based affinity. These settings are tied to a pool and are applied to all servers within the pool.
-* **Front-end port:** This port is the public port that is opened on the application gateway. Traffic hits this port, and then gets redirected to one of the back-end servers.
-* **Listener:** The listener has a front-end port, a protocol (Http or Https, these are case-sensitive), and the SSL certificate name (if configuring SSL offload).
-* **Rule:** The rule binds the listener and the back-end server pool and defines which back-end server pool the traffic should be directed to when it hits a particular listener. Currently, only the *basic* rule is supported. The *basic* rule is round-robin load distribution.
+* **Backend server pool:** The list of IP addresses of the backend servers. The IP addresses listed should either belong to the virtual network (in a different subnet than the application gateway) or be a public IP/VIP.
+* **Backend server pool settings:** Every pool has settings like port, protocol, and cookie-based affinity. These settings are tied to a pool and are applied to all servers within the pool.
+* **Frontend port:** This port is the public port that is opened on the application gateway. Traffic hits this port, and then gets redirected to one of the backend servers.
+* **Listener:** The listener has a frontend port, a protocol (Http or Https; these values are case-sensitive), and the SSL certificate name (if configuring SSL offload).
+* **Rule:** The rule binds the listener and the backend server pool and defines which backend server pool the traffic should be directed to when it hits a particular listener. Currently, only the *basic* rule is supported; it performs round-robin load distribution.
## Create an application gateway
This step assigns the subnet object to variable $subnet for the next steps.
$gipconfig = New-AzApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet ```
-This step creates an application gateway IP configuration named "gatewayIP01". When Application Gateway starts, it picks up an IP address from the subnet configured and route network traffic to the IP addresses in the back-end IP pool. Keep in mind that each instance takes one IP address.
+This step creates an application gateway IP configuration named "gatewayIP01". When Application Gateway starts, it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the backend IP pool. Keep in mind that each instance takes one IP address.
### Step 2
This step creates an application gateway IP configuration named "gatewayIP01". W
$pool = New-AzApplicationGatewayBackendAddressPool -Name pool01 -BackendIPAddresses 10.1.1.8,10.1.1.9,10.1.1.10 ```
-This step configures the back-end IP address pool named "pool01" with IP addresses "10.1.1.8, 10.1.1.9, 10.1.1.10". Those are the IP addresses that receive the network traffic that comes from the front-end IP endpoint. You replace the preceding IP addresses to add your own application IP address endpoints.
+This step configures the backend IP address pool named "pool01" with IP addresses "10.1.1.8, 10.1.1.9, 10.1.1.10". These are the IP addresses that receive the network traffic that comes from the frontend IP endpoint. Replace the preceding IP addresses with your own application IP address endpoints.
### Step 3
This step configures the back-end IP address pool named "pool01" with IP address
$poolSetting = New-AzApplicationGatewayBackendHttpSettings -Name poolsetting01 -Port 80 -Protocol Http -CookieBasedAffinity Disabled ```
-This step configures application gateway setting "poolsetting01" for the load balanced network traffic in the back-end pool.
+This step configures the application gateway setting "poolsetting01" for the load-balanced network traffic in the backend pool.
### Step 4
This step configures application gateway setting "poolsetting01" for the load ba
$fp = New-AzApplicationGatewayFrontendPort -Name frontendport01 -Port 80 ```
-This step configures the front-end IP port named "frontendport01" for the ILB.
+This step configures the frontend IP port named "frontendport01" for the ILB.
### Step 5
This step configures the front-end IP port named "frontendport01" for the ILB.
$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name fipconfig01 -Subnet $subnet ```
-This step creates the front-end IP configuration called "fipconfig01" and associates it with a private IP from the current virtual network subnet.
+This step creates the frontend IP configuration called "fipconfig01" and associates it with a private IP from the current virtual network subnet.
### Step 6
This step creates the front-end IP configuration called "fipconfig01" and associ
$listener = New-AzApplicationGatewayHttpListener -Name listener01 -Protocol Http -FrontendIPConfiguration $fipconfig -FrontendPort $fp ```
-This step creates the listener called "listener01" and associates the front-end port to the front-end IP configuration.
+This step creates the listener called "listener01" and associates the frontend port to the frontend IP configuration.
### Step 7
Get-AzApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg
``` VERBOSE: 10:52:46 PM - Begin Operation: Get-AzureApplicationGateway
-Get-AzureApplicationGateway : ResourceNotFound: The gateway does not exist.
+Get-AzureApplicationGateway : ResourceNotFound: The gateway does not exist.
``` ## Next steps
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-key-vault-common-errors.md
Title: Common key vault errors in Application Gateway description: This article identifies key vault-related problems, and helps you resolve them for smooth operations of Application Gateway. Last updated 07/26/2022
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
Title: Health monitoring overview for Azure Application Gateway
-description: Azure Application Gateway monitors the health of all resources in its back-end pool and automatically removes any resource considered unhealthy from the pool.
+description: Azure Application Gateway monitors the health of all resources in its backend pool and automatically removes any resource considered unhealthy from the pool.
# Application Gateway health monitoring overview
-Azure Application Gateway by default monitors the health of all resources in its back-end pool and automatically removes any resource considered unhealthy from the pool. Application Gateway continues to monitor the unhealthy instances and adds them back to the healthy back-end pool once they become available and respond to health probes. By default, Application gateway sends the health probes with the same port that is defined in the back-end HTTP settings. A custom probe port can be configured using a custom health probe.
+Azure Application Gateway by default monitors the health of all resources in its backend pool and automatically removes any resource considered unhealthy from the pool. Application Gateway continues to monitor the unhealthy instances and adds them back to the healthy backend pool once they become available and respond to health probes. By default, Application Gateway sends the health probes on the same port that is defined in the backend HTTP settings. A custom probe port can be configured using a custom health probe.
-The source IP address Application Gateway uses for health probes depends on the backend pool:
+The source IP address that Application Gateway uses for health probes depends on the backend pool:
- If the server address in the backend pool is a public endpoint, then the source address is the application gateway's frontend public IP address.
- If the server address in the backend pool is a private endpoint, then the source IP address is from the application gateway subnet's private IP address space.
In addition to using default health probe monitoring, you can also customize the
## Default health probe
-An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the back-end pool. For default probes if the backend http settings are configured for HTTPS, the probe uses HTTPS to test health of the backend servers.
+An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the backend pool. For default probes, if the backend HTTP settings are configured for HTTPS, the probe uses HTTPS to test the health of the backend servers.
-For example: You configure your application gateway to use back-end servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
+For example: You configure your application gateway to use backend servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
If the default probe check fails for server A, the application gateway stops forwarding requests to this server. The default probe still continues to check for server A every 30 seconds. When server A responds successfully to one request from a default health probe, application gateway starts forwarding the requests to the server again.
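To approximate what the default probe sees, you can issue the same request from a backend server itself; a rough check, assuming a plain HTTP site bound on port 80:

```powershell
# Run on the backend server: mimics the default probe's GET to 127.0.0.1 with a 30-second timeout.
# Note: Invoke-WebRequest throws on 4xx/5xx responses, which the probe would treat as unhealthy.
$resp = Invoke-WebRequest -Uri 'http://127.0.0.1:80/' -TimeoutSec 30 -UseBasicParsing
$resp.StatusCode   # Healthy if the code is in the 200-399 range.
```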
If the default probe check fails for server A, the application gateway stops for
| Probe URL |\<protocol\>://127.0.0.1:\<port\>/ |The protocol and port are inherited from the backend HTTP settings to which the probe is associated |
| Interval |30 |The amount of time in seconds to wait before the next health probe is sent.|
| Time-out |30 |The amount of time in seconds the application gateway waits for a probe response before marking the probe as unhealthy. If a probe returns as healthy, the corresponding backend is immediately marked as healthy.|
-| Unhealthy threshold |3 |Governs how many probes to send in case there's a failure of the regular health probe. In v1 SKU, these additional health probes are sent in quick succession to determine the health of the backend quickly and don't wait for the probe interval. In the case of v2 SKU, the health probes wait the interval. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
+| Unhealthy threshold |3 |Governs how many probes to send in case there's a failure of the regular health probe. In v1 SKU, these additional health probes are sent in quick succession to determine the health of the backend quickly and don't wait for the probe interval. For v2 SKU, the health probes wait the interval. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
The default probe looks only at \<protocol\>:\//127.0.0.1:\<port\> to determine health status. If you need to configure the health probe to go to a custom URL or modify any other settings, you must use custom probes. For more information about HTTPS probes, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md#for-probe-traffic).
Also if there are multiple listeners, then each listener probes the backend inde
## Custom health probe
-Custom probes allow you to have more granular control over the health monitoring. When using custom probes, you can configure a custom hostname, URL path, probe interval, and how many failed responses to accept before marking the back-end pool instance as unhealthy, etc.
+Custom probes allow you to have more granular control over health monitoring. When using custom probes, you can configure settings such as a custom hostname, URL path, probe interval, and the number of failed responses to accept before marking the backend pool instance as unhealthy.
### Custom health probe settings
The following table provides definitions for the properties of a custom health p
| Probe property | Description |
| - | - |
-| Name |Name of the probe. This name is used to identify and refer to the probe in back-end HTTP settings. |
-| Protocol |Protocol used to send the probe. This has to match with the protocol defined in the back-end HTTP settings it is associated to|
+| Name |Name of the probe. This name is used to identify and refer to the probe in backend HTTP settings. |
+| Protocol |Protocol used to send the probe. This has to match the protocol defined in the backend HTTP settings it's associated with.|
| Host |Host name to send the probe with. In the v1 SKU, this value is used only for the host header of the probe request. In the v2 SKU, it's used both as the host header and for SNI. |
| Path |Relative path of the probe. A valid path starts with '/'. |
| Port |If defined, this is used as the destination port. Otherwise, it uses the same port as the HTTP settings that it's associated with. This property is only available in the v2 SKU. |
| Interval |Probe interval in seconds. This value is the time interval between two consecutive probes. |
| Time-out |Probe time-out in seconds. If a valid response isn't received within this time-out period, the probe is marked as failed. |
-| Unhealthy threshold |Probe retry count. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold |
+| Unhealthy threshold |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold |
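To make these properties concrete, here's a minimal Azure PowerShell sketch that defines a custom probe with the values from the table and attaches it to an existing gateway. The gateway, resource-group, host, and probe names are illustrative placeholders, not values from this article.

```azurepowershell
# A minimal sketch: add a custom health probe to an existing gateway.
# Gateway, resource-group, host, and probe names are illustrative placeholders.
$gw = Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"

# Define the probe using the properties described in the table above.
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $gw `
  -Name "customProbe" `
  -Protocol Http `
  -HostName "contoso.com" `
  -Path "/health" `
  -Interval 30 `
  -Timeout 30 `
  -UnhealthyThreshold 3

# Commit the change to the gateway.
Set-AzApplicationGateway -ApplicationGateway $gw
```

The probe still has to be referenced from a backend HTTP setting before it takes effect, as noted elsewhere in this article.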
### Probe matching
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
Title: TLS policy overview for Azure Application Gateway
-description: Learn how to configure TLS policy for Azure Application Gateway and reduce encryption and decryption overhead from a back-end server farm.
+description: Learn how to configure TLS policy for Azure Application Gateway and reduce encryption and decryption overhead from a backend server farm.
-+ Last updated 12/17/2020-+
# Application Gateway TLS policy overview
-You can use Azure Application Gateway to centralize TLS/SSL certificate management and reduce encryption and decryption overhead from a back-end server farm. This centralized TLS handling also lets you specify a central TLS policy that's suited to your organizational security requirements. This helps you meet compliance requirements as well as security guidelines and recommended practices.
+You can use Azure Application Gateway to centralize TLS/SSL certificate management and reduce encryption and decryption overhead from a backend server farm. This centralized TLS handling also lets you specify a central TLS policy that's suited to your organizational security requirements. This helps you meet compliance requirements as well as security guidelines and recommended practices.
The TLS policy includes control of the TLS protocol version as well as the cipher suites and the order in which ciphers are used during a TLS handshake. Application Gateway offers two mechanisms for controlling TLS policy. You can use either a predefined policy or a custom policy.
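As a sketch of the predefined-policy mechanism in Azure PowerShell: the cmdlets below list the built-in policies and apply one to an existing gateway. The gateway and resource-group names are placeholders; the policy name shown is one of the standard predefined identifiers.

```azurepowershell
# A sketch: list the predefined TLS policies, then apply one to an existing gateway.
# Gateway and resource-group names are illustrative placeholders.
Get-AzApplicationGatewaySslPredefinedPolicy

$gw = Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Predefined -PolicyName "AppGwSslPolicy20170401S"

# Commit the change.
Set-AzApplicationGateway -ApplicationGateway $gw
```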
If a TLS policy needs to be configured for your requirements, you can use a Custom TLS policy.
> The newer, stronger ciphers and TLSv1.3 support are only available with the **CustomV2 policy (Preview)**. It provides enhanced security and performance benefits.

> [!IMPORTANT]
-> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
+> - If you're using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
> This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
-> - The cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" are mandatory for TLSv1.3. You need NOT mention these explicitly when setting a CustomV2 policy with minimum protocol version 1.2 or 1.3 through [PowerShell](application-gateway-configure-ssl-policy-powershell.md) or CLI. Accordingly, these ciphers suites will not appear in the Get Details output, with an exception of Portal.
+> - The cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" are mandatory for TLSv1.3. You need NOT mention these explicitly when setting a CustomV2 policy with minimum protocol version 1.2 or 1.3 through [PowerShell](application-gateway-configure-ssl-policy-powershell.md) or CLI. Accordingly, these cipher suites won't appear in the Get Details output, with the exception of the portal.
### Cipher suites
Application Gateway supports the following cipher suites from which you can choo
- The connections to backend servers are always with minimum protocol TLS v1.0 and up to TLS v1.2. Therefore, only TLS versions 1.0, 1.1, and 1.2 are supported to establish a secured connection with backend servers.
- As of now, the TLS 1.3 implementation is not enabled with the "Zero Round Trip Time (0-RTT)" feature.
-- Application Gateway v2 does not support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
+- Application Gateway v2 doesn't support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_DHE_RSA_WITH_AES_128_CBC_SHA
- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
Learn how to troubleshoot bad gateway (502) errors received when using Azure Application Gateway.
After you configure an application gateway, one of the errors that you may see is **Server Error: 502 - Web server received an invalid response while acting as a gateway or proxy server**. This error may happen for the following main reasons:

* NSG, UDR, or Custom DNS is blocking access to backend pool members.
-* Back-end VMs or instances of virtual machine scale set aren't responding to the default health probe.
+* Backend VMs or instances of virtual machine scale set aren't responding to the default health probe.
* Invalid or improper configuration of custom health probes.
-* Azure Application Gateway's [back-end pool isn't configured or empty](#empty-backendaddresspool).
+* Azure Application Gateway's [backend pool isn't configured or empty](#empty-backendaddresspool).
* None of the VMs or instances in [virtual machine scale set are healthy](#unhealthy-instances-in-backendaddresspool).
* [Request time-out or connectivity issues](#request-time-out) with user requests.
Validate NSG, UDR, and DNS configuration by going through the following steps:
### Cause
-502 errors can also be frequent indicators that the default health probe can't reach back-end VMs.
+502 errors can also be frequent indicators that the default health probe can't reach backend VMs.
When an application gateway instance is provisioned, it automatically configures a default health probe to each BackendAddressPool using properties of the BackendHttpSetting. No user input is required to set this probe. Specifically, when a load-balancing rule is configured, an association is made between a BackendHttpSetting and a BackendAddressPool. A default probe is configured for each of these associations and the application gateway starts a periodic health check connection to each instance in the BackendAddressPool at the port specified in the BackendHttpSetting element.
The following table lists the values associated with the default health probe:
| Probe URL |`http://127.0.0.1/` |URL path |
| Interval |30 |Probe interval in seconds |
| Time-out |30 |Probe time-out in seconds |
-| Unhealthy threshold |3 |Probe retry count. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
+| Unhealthy threshold |3 |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
### Solution
The following table lists the values associated with the default health probe:
### Cause
-Custom health probes allow additional flexibility to the default probing behavior. When you use custom probes, you can configure the probe interval, the URL, the path to test, and how many failed responses to accept before marking the back-end pool instance as unhealthy.
+Custom health probes allow additional flexibility to the default probing behavior. When you use custom probes, you can configure the probe interval, the URL, the path to test, and how many failed responses to accept before marking the backend pool instance as unhealthy.
The following additional properties are added:

| Probe property | Description |
| - | - |
-| Name |Name of the probe. This name is used to refer to the probe in back-end HTTP settings. |
-| Protocol |Protocol used to send the probe. The probe uses the protocol defined in the back-end HTTP settings |
+| Name |Name of the probe. This name is used to refer to the probe in backend HTTP settings. |
+| Protocol |Protocol used to send the probe. The probe uses the protocol defined in the backend HTTP settings |
| Host |Host name to send the probe. Applicable only when multi-site is configured on the application gateway. This is different from VM host name. |
| Path |Relative path of the probe. The valid path starts from '/'. The probe is sent to \<protocol\>://\<host\>:\<port\>\<path\> |
| Interval |Probe interval in seconds. This is the time interval between two consecutive probes. |
| Time-out |Probe time-out in seconds. If a valid response isn't received within this time-out period, the probe is marked as failed. |
-| Unhealthy threshold |Probe retry count. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
+| Unhealthy threshold |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
### Solution
Validate that the Custom Health Probe is configured correctly, as the preceding table describes.
### Cause
-When a user request is received, the application gateway applies the configured rules to the request and routes it to a back-end pool instance. It waits for a configurable interval of time for a response from the back-end instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway doesn't receive a response from back-end application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway doesn't receive a response from the back-end application in this interval, the request will be tried against a second back-end pool member. If the second request fails the user request gets a 502 error.
+When a user request is received, the application gateway applies the configured rules to the request and routes it to a backend pool instance. It waits for a configurable interval of time for a response from the backend instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway doesn't receive a response from the backend application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway doesn't receive a response from the backend application in this interval, the request will be tried against a second backend pool member. If the second request fails, the user request gets a 502 error.
### Solution
-Application Gateway allows you to configure this setting via the BackendHttpSetting, which can be then applied to different pools. Different back-end pools can have different BackendHttpSetting, and a different request time-out configured.
+Application Gateway allows you to configure this setting via the BackendHttpSetting, which can then be applied to different pools. Different backend pools can have different BackendHttpSetting, and a different request time-out configured.
```azurepowershell
New-AzApplicationGatewayBackendHttpSettings -Name 'Setting01' -Port 80 -Protocol Http -CookieBasedAffinity Enabled -RequestTimeout 60
```
Application Gateway allows you to configure this setting via the BackendHttpSetting.
### Cause
-If the application gateway has no VMs or virtual machine scale set configured in the back-end address pool, it can't route any customer request and sends a bad gateway error.
+If the application gateway has no VMs or virtual machine scale set configured in the backend address pool, it can't route any customer request and sends a bad gateway error.
### Solution
-Ensure that the back-end address pool isn't empty. This can be done either via PowerShell, CLI, or portal.
+Ensure that the backend address pool isn't empty. This can be done via PowerShell, CLI, or the portal.
```azurepowershell
Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"
```
-The output from the preceding cmdlet should contain non-empty back-end address pool. The following example shows two pools returned which are configured with an FQDN or an IP addresses for the backend VMs. The provisioning state of the BackendAddressPool must be 'Succeeded'.
+The output from the preceding cmdlet should contain a non-empty backend address pool. The following example shows two pools returned that are configured with an FQDN or IP addresses for the backend VMs. The provisioning state of the BackendAddressPool must be 'Succeeded'.
BackendAddressPoolsText:
### Cause
-If all the instances of BackendAddressPool are unhealthy, then the application gateway doesn't have any back-end to route user request to. This can also be the case when back-end instances are healthy but don't have the required application deployed.
+If all the instances of BackendAddressPool are unhealthy, then the application gateway doesn't have any backend to route user requests to. This can also be the case when backend instances are healthy but don't have the required application deployed.
### Solution
-Ensure that the instances are healthy and the application is properly configured. Check if the back-end instances can respond to a ping from another VM in the same VNet. If configured with a public end point, ensure a browser request to the web application is serviceable.
+Ensure that the instances are healthy and the application is properly configured. Check if the backend instances can respond to a ping from another VM in the same VNet. If configured with a public endpoint, ensure a browser request to the web application is serviceable.
## Next steps
application-gateway Application Gateway Websocket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-websocket.md
Title: WebSocket support in Azure Application Gateway
description: Application Gateway provides native support for WebSocket across all gateway sizes. There are no user-configurable settings. -+
To establish a WebSocket connection, a specific HTTP-based handshake is exchanged between the client and the server.
![Diagram compares a client interacting with a web server, connecting twice to get two replies, with a WebSocket interaction, where a client connects to a server once to get multiple replies.](./media/application-gateway-websocket/websocket.png)

> [!NOTE]
-> As described, the HTTP protocol is used only to perform a handshake when establishing a WebSocket connection. Once the handshake is completed, a WebSocket connection gets opened for transmitting the data, and the Web Application Firewall (WAF) cannot parse any contents. Therefore, WAF does not perform any inspections on such data.
+> As described, the HTTP protocol is used only to perform a handshake when establishing a WebSocket connection. Once the handshake is completed, a WebSocket connection gets opened for transmitting the data, and the Web Application Firewall (WAF) cannot parse any contents. Therefore, WAF doesn't perform any inspections on such data.
### Listener configuration element
Your backend must have a HTTP/HTTPS web server running on the configured port (u
Sec-WebSocket-Version: 13
```
-Another reason for this is that application gateway backend health probe supports HTTP and HTTPS protocols only. If the backend server does not respond to HTTP or HTTPS probes, it is taken out of backend pool.
+Another reason for this is that the application gateway backend health probe supports HTTP and HTTPS protocols only. If the backend server doesn't respond to HTTP or HTTPS probes, it's taken out of the backend pool.
## Next steps
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
+
+ Title: Azure Application Gateway frontend IP address configuration
+description: This article describes how to configure the Azure Application Gateway frontend IP address.
++++ Last updated : 09/09/2020+++
+# Application Gateway frontend IP address configuration
+
+You can configure the application gateway to have a public IP address, a private IP address, or both. A public IP address is required when you host a back end that clients must access over the Internet via an Internet-facing virtual IP (VIP).
+
+## Public and private IP address support
+
+Application Gateway v2 currently doesn't support a private IP-only mode. It supports the following combinations:
+
+* Private IP address and public IP address
+* Public IP address only
+
+For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address).
++
+A public IP address isn't required for an internal endpoint that's not exposed to the Internet. That's known as an *internal load-balancer* (ILB) endpoint or private frontend IP. An application gateway ILB is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
+
+Only one public IP address and one private IP address are supported. You choose the frontend IP when you create the application gateway.
+
+- For a public IP address, you can create a new public IP address or use an existing public IP in the same location as the application gateway. For more information, see [static vs. dynamic public IP address](./application-gateway-components.md#static-versus-dynamic-public-ip-address).
+
+- For a private IP address, you can specify a private IP address from the subnet where the application gateway is created. For Application Gateway v2 SKU deployments, a static IP address must be defined when adding a private IP address to the gateway. For Application Gateway v1 SKU deployments, if you don't specify an IP address, an available IP address is automatically selected from the subnet. The IP address type that you select (static or dynamic) can't be changed later. For more information, see [Create an application gateway with an internal load balancer](./application-gateway-ilb-arm.md).
+
+A frontend IP address is associated to a *listener*, which checks for incoming requests on the frontend IP.
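As a sketch of how the two frontend configurations look in Azure PowerShell: the snippet below defines one public and one private frontend IP configuration; the virtual network, subnet, public IP, and all names are placeholders. Both objects would then be passed to the gateway at creation time.

```azurepowershell
# A minimal sketch: define a public and a private frontend IP configuration.
# The virtual network, subnet, public IP, and names are illustrative placeholders.
$vnet   = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "ExampleResourceGroup"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "appgwSubnet" -VirtualNetwork $vnet
$pip    = Get-AzPublicIpAddress -Name "AppGwIP" -ResourceGroupName "ExampleResourceGroup"

# Public frontend (Internet-facing VIP).
$fipPublic  = New-AzApplicationGatewayFrontendIPConfig -Name "fipPublic" -PublicIPAddress $pip

# Private frontend (ILB endpoint); v2 requires a static private address.
$fipPrivate = New-AzApplicationGatewayFrontendIPConfig -Name "fipPrivate" -Subnet $subnet -PrivateIPAddress "10.0.0.10"
```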
+
+## Next steps
+
+- [Learn about listener configuration](configuration-listeners.md)
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
# Application Gateway HTTP settings configuration
-The application gateway routes traffic to the back-end servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
+The application gateway routes traffic to the backend servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
## Cookie-based affinity
Azure Application Gateway uses gateway-managed cookies for maintaining user sessions.
This feature is useful when you want to keep a user session on the same server and when session state is saved locally on the server for a user session. If the application can't handle cookie-based affinity, you can't use this feature. To use it, make sure that the clients support cookies. > [!NOTE]
-> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie does not contain any user information and is used purely for routing.
+> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie doesn't contain any user information and is used purely for routing.
-The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis-03#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
+The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis-03#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. For CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
To support this change, starting February 17, 2020, Application Gateway (all the SKU types) will inject another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. The *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it (*"SameSite=None; Secure"*) so that sticky sessions are maintained even for cross-origin requests.
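For reference, the affinity cookie name can also be customized on the HTTP setting. A sketch, assuming the -AffinityCookieName parameter is available in your Az.Network version; the setting and cookie names are placeholders:

```azurepowershell
# A sketch: enable cookie-based affinity with a custom affinity cookie name.
# Setting and cookie names are illustrative placeholders; -AffinityCookieName
# is assumed to be available in your Az.Network version.
New-AzApplicationGatewayBackendHttpSetting -Name "affinitySetting" -Port 80 -Protocol Http `
  -CookieBasedAffinity Enabled -AffinityCookieName "MyAffinityCookie" -RequestTimeout 30
```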
Please refer to TLS offload and End-to-End TLS documentation for Application Gateway.
## Connection draining
-Connection draining helps you gracefully remove back-end pool members during planned service updates. You can apply this setting to all members of a back-end pool by enabling connection draining on the HTTP setting. It ensures that all deregistering instances of a back-end pool continue to maintain existing connections and serve on-going requests for a configurable timeout and don't receive any new requests or connections. The only exception to this are requests bound for deregistering instances because of gateway-managed session affinity and will continue to be forwarded to the deregistering instances. Connection draining applies to back-end instances that are explicitly removed from the back-end pool.
+Connection draining helps you gracefully remove backend pool members during planned service updates. You can apply this setting to all members of a backend pool by enabling connection draining on the HTTP setting. It ensures that all deregistering instances of a backend pool continue to maintain existing connections and serve on-going requests for a configurable timeout and don't receive any new requests or connections. The only exception is requests bound for deregistering instances because of gateway-managed session affinity; these requests continue to be forwarded to the deregistering instances. Connection draining applies to backend instances that are explicitly removed from the backend pool.
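A minimal sketch of enabling connection draining on an existing HTTP setting; the gateway, resource-group, and setting names are placeholders:

```azurepowershell
# A sketch: enable connection draining on an existing backend HTTP setting.
# Gateway, resource-group, and setting names are illustrative placeholders.
$gw       = Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"
$settings = Get-AzApplicationGatewayBackendHttpSetting -Name "Setting01" -ApplicationGateway $gw

# Keep existing connections alive for up to 60 seconds while an instance deregisters.
Set-AzApplicationGatewayConnectionDraining -BackendHttpSettings $settings -Enabled $true -DrainTimeoutInSec 60

# Commit the change.
Set-AzApplicationGateway -ApplicationGateway $gw
```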
## Protocol
-Application Gateway supports both HTTP and HTTPS for routing requests to the back-end servers. If you choose HTTP, traffic to the back-end servers is unencrypted. If unencrypted communication isn't acceptable, choose HTTPS.
+Application Gateway supports both HTTP and HTTPS for routing requests to the backend servers. If you choose HTTP, traffic to the backend servers is unencrypted. If unencrypted communication isn't acceptable, choose HTTPS.
-This setting combined with HTTPS in the listener supports [end-to-end TLS](ssl-overview.md). This allows you to securely transmit sensitive data encrypted to the back end. Each back-end server in the back-end pool that has end-to-end TLS enabled must be configured with a certificate to allow secure communication.
+This setting combined with HTTPS in the listener supports [end-to-end TLS](ssl-overview.md). This allows you to securely transmit sensitive data encrypted to the back end. Each backend server in the backend pool that has end-to-end TLS enabled must be configured with a certificate to allow secure communication.
## Port
-This setting specifies the port where the back-end servers listen to traffic from the application gateway. You can configure ports ranging from 1 to 65535.
+This setting specifies the port where the backend servers listen to traffic from the application gateway. You can configure ports ranging from 1 to 65535.
## Trusted root certificate
-If you select HTTPS as the back-end protocol, the Application Gateway requires a trusted root certificate to trust the back-end pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, then you must provide the Application Gateway the matching public certificate that the back-end pool will be using. This certificate must be uploaded directly to the Application Gateway in .CER format.
+If you select HTTPS as the backend protocol, the Application Gateway requires a trusted root certificate to trust the backend pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, then you must provide the Application Gateway the matching public certificate that the backend pool will be using. This certificate must be uploaded directly to the Application Gateway in .CER format.
-If you plan to use a certificate on the back-end pool that is signed by a trusted public Certificate Authority, then you can set the **Use well known CA certificate** option to **Yes** and skip uploading a public certificate.
+If you plan to use a certificate on the backend pool that is signed by a trusted public Certificate Authority, then you can set the **Use well known CA certificate** option to **Yes** and skip uploading a public certificate.
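A sketch of uploading such a .CER file to the gateway and referencing it from an HTTPS setting; the file path, gateway, and all names are placeholders:

```azurepowershell
# A sketch: upload a trusted root certificate (.CER) and reference it in an HTTPS setting.
# File path, gateway, and names are illustrative placeholders.
$gw = Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"
Add-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $gw -Name "internalRootCA" -CertificateFile "C:\certs\internalRootCA.cer"

# Reference the uploaded root certificate from a new HTTPS backend setting.
$rootCert = Get-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $gw -Name "internalRootCA"
Add-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $gw -Name "httpsSetting" -Port 443 `
  -Protocol Https -CookieBasedAffinity Disabled -RequestTimeout 30 -TrustedRootCertificate $rootCert

# Commit the change.
Set-AzApplicationGateway -ApplicationGateway $gw
```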
## Request timeout
-This setting is the number of seconds that the application gateway waits to receive a response from the back-end server.
+This setting is the number of seconds that the application gateway waits to receive a response from the backend server.
-## Override back-end path
+## Override backend path
This setting lets you configure an optional custom forwarding path to use when the request is forwarded to the back end. Any part of the incoming path that matches the custom path in the **override backend path** field is copied to the forwarded path. The following table shows how this feature works:

- When the HTTP setting is attached to a basic request-routing rule:
- | Original request | Override back-end path | Request forwarded to back end |
+ | Original request | Override backend path | Request forwarded to back end |
| -- | -- | -- |
| /home/ | /override/ | /override/home/ |
| /home/secondhome/ | /override/ | /override/home/secondhome/ |

- When the HTTP setting is attached to a path-based request-routing rule:
- | Original request | Path rule | Override back-end path | Request forwarded to back end |
+ | Original request | Path rule | Override backend path | Request forwarded to back end |
| -- | -- | -- | -- |
| /pathrule/home/ | /pathrule* | /override/ | /override/home/ |
| /pathrule/home/secondhome/ | /pathrule* | /override/ | /override/home/secondhome/ |
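The behavior in the tables above is driven by the -Path parameter on the backend HTTP setting. A minimal sketch with a placeholder setting name:

```azurepowershell
# A sketch: set an override backend path of /override/ on an HTTP setting.
# The setting name is an illustrative placeholder.
New-AzApplicationGatewayBackendHttpSetting -Name "overrideSetting" -Port 80 -Protocol Http `
  -CookieBasedAffinity Disabled -RequestTimeout 30 -Path "/override/"
```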
This setting lets you configure an optional custom forwarding path to use when the request is forwarded to the back end.
This setting associates a [custom probe](application-gateway-probe-overview.md#custom-health-probe) with an HTTP setting. You can associate only one custom probe with an HTTP setting. If you don't explicitly associate a custom probe, the [default probe](application-gateway-probe-overview.md#default-health-probe-settings) is used to monitor the health of the back end. We recommend that you create a custom probe for greater control over the health monitoring of your back ends. > [!NOTE]
-> The custom probe doesn't monitor the health of the back-end pool unless the corresponding HTTP setting is explicitly associated with a listener.
+> The custom probe doesn't monitor the health of the backend pool unless the corresponding HTTP setting is explicitly associated with a listener.
## Configuring the host name
There are two aspects of an HTTP setting that influence the [`Host`](https://dat
- "Pick host name from backend-address" - "Host name override"
-## Pick host name from back-end address
+## Pick host name from backend address
-This capability dynamically sets the *host* header in the request to the host name of the back-end pool. It uses an IP address or FQDN.
+This capability dynamically sets the *host* header in the request to the host name of the backend pool. It uses an IP address or FQDN.
This feature helps when the domain name of the back end is different from the DNS name of the application gateway, and the back end relies on a specific host header to resolve to the correct endpoint.
For a custom domain whose existing custom DNS name is mapped to the app service,
This capability replaces the *host* header in the incoming request on the application gateway with the host name that you specify.
-For example, if *www.contoso.com* is specified in the **Host name** setting, the original request *`https://appgw.eastus.cloudapp.azure.com/path1` is changed to *`https://www.contoso.com/path1` when the request is forwarded to the back-end server.
+For example, if *www.contoso.com* is specified in the **Host name** setting, the original request `https://appgw.eastus.cloudapp.azure.com/path1` is changed to `https://www.contoso.com/path1` when the request is forwarded to the backend server.
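The two host-header behaviors map to two parameters on the backend HTTP setting. A sketch with placeholder names:

```azurepowershell
# Sketches of the two host-header options on an HTTP setting; names are placeholders.

# Option 1: pick the host name from the backend address (IP/FQDN of the pool member).
New-AzApplicationGatewayBackendHttpSetting -Name "pickHostSetting" -Port 443 -Protocol Https `
  -CookieBasedAffinity Disabled -RequestTimeout 30 -PickHostNameFromBackendAddress

# Option 2: override the host header with a fixed value.
New-AzApplicationGatewayBackendHttpSetting -Name "overrideHostSetting" -Port 443 -Protocol Https `
  -CookieBasedAffinity Disabled -RequestTimeout 30 -HostName "www.contoso.com"
```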
## Next steps

-- [Learn about the back-end pool](configuration-overview.md#back-end-pool)
+- [Learn about the backend pool](configuration-overview.md#backend-pool)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
An application gateway is a dedicated deployment in your virtual network. Within
### Size of the subnet
-Application Gateway uses one private IP address per instance, plus another private IP address if a private front-end IP is configured.
+Application Gateway uses one private IP address per instance, plus another private IP address if a private frontend IP is configured.
-Azure also reserves five IP addresses in each subnet for internal use: the first four and the last IP addresses. For example, consider 15 application gateway instances with no private front-end IP. You need at least 20 IP addresses for this subnet: five for internal use and 15 for the application gateway instances.
+Azure also reserves five IP addresses in each subnet for internal use: the first four and the last IP addresses. For example, consider 15 application gateway instances with no private frontend IP. You need at least 20 IP addresses for this subnet: five for internal use and 15 for the application gateway instances.
-Consider a subnet that has 27 application gateway instances and an IP address for a private front-end IP. In this case, you need 33 IP addresses: 27 for the application gateway instances, one for the private front end, and five for internal use.
+Consider a subnet that has 27 application gateway instances and an IP address for a private frontend IP. In this case, you need 33 IP addresses: 27 for the application gateway instances, one for the private front end, and five for internal use.
Application Gateway (Standard or WAF) SKU can support up to 32 instances (32 instance IP addresses + 1 private frontend IP configuration + 5 Azure reserved), so a minimum subnet size of /26 is recommended.
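As a sketch, carving out a dedicated /26 gateway subnet (64 addresses, 59 usable after the five Azure reserves) might look like this; the network names and address prefix are placeholders:

```azurepowershell
# A sketch: add a dedicated /26 subnet for the gateway to an existing virtual network.
# Names and the address prefix are illustrative placeholders.
$vnet = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "ExampleResourceGroup"
Add-AzVirtualNetworkSubnetConfig -Name "appgwSubnet" -VirtualNetwork $vnet -AddressPrefix "10.0.0.0/26"
$vnet | Set-AzVirtualNetwork
```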
Network security groups (NSGs) are supported on Application Gateway. But there are some restrictions:
For this scenario, use NSGs on the Application Gateway subnet. Put the following restrictions on the subnet in this order of priority:

1. Allow incoming traffic from a source IP or IP range with the destination as the entire Application Gateway subnet address range and destination port as your inbound access port, for example, port 80 for HTTP access.
-2. Allow incoming requests from source as **GatewayManager** service tag and destination as **Any** and destination ports as 65503-65534 for the Application Gateway v1 SKU, and ports 65200-65535 for v2 SKU for [back-end health status communication](./application-gateway-diagnostics.md). This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. Without appropriate certificates in place, external entities can't initiate changes on those endpoints.
+2. Allow incoming requests from source as **GatewayManager** service tag and destination as **Any** and destination ports as 65503-65534 for the Application Gateway v1 SKU, and ports 65200-65535 for v2 SKU for [backend health status communication](./application-gateway-diagnostics.md). This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. Without appropriate certificates in place, external entities can't initiate changes on those endpoints.
3. Allow incoming Azure Load Balancer probes (*AzureLoadBalancer* tag) on the [network security group](../virtual-network/network-security-groups-overview.md).
4. Allow expected inbound traffic to match your listener configuration (i.e. if you have listeners configured for port 80, you will want an allow inbound rule for port 80).
5. Block all other incoming traffic by using a deny-all rule.
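For example, the infrastructure rule from step 2 above, for a v2 SKU, could be sketched like this; the rule name, NSG name, location, and priority are placeholders:

```azurepowershell
# A sketch of the v2-SKU infrastructure rule from step 2 in the list above.
# Rule/NSG names, location, and priority are illustrative placeholders.
$gwmRule = New-AzNetworkSecurityRuleConfig -Name "AllowGatewayManager" `
  -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
  -SourceAddressPrefix GatewayManager -SourcePortRange * `
  -DestinationAddressPrefix * -DestinationPortRange 65200-65535

# Attach the rule to a new NSG for the gateway subnet.
$nsg = New-AzNetworkSecurityGroup -Name "appgwNsg" -ResourceGroupName "ExampleResourceGroup" `
  -Location "eastus" -SecurityRules $gwmRule
```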
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Supported user-defined routes

> [!IMPORTANT]
-> Using UDRs on the Application Gateway subnet might cause the health status in the [back-end health view](./application-gateway-diagnostics.md#back-end-health) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the back-end health, logs, and metrics.
+> Using UDRs on the Application Gateway subnet might cause the health status in the [backend health view](./application-gateway-diagnostics.md#backend-health) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the backend health, logs, and metrics.
- **v1**
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Next steps

-- [Learn about front-end IP address configuration](configuration-front-end-ip.md).
+- [Learn about frontend IP address configuration](configuration-frontend-ip.md).
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
Last updated 09/09/2020-+
For the v1 SKU, requests are matched according to the order of the rules and the
For the v2 SKU, multi-site listeners are processed before basic listeners.
-## Front-end IP address
+## Frontend IP address
-Choose the front-end IP address that you plan to associate with this listener. The listener will listen to incoming requests on this IP.
+Choose the frontend IP address that you plan to associate with this listener. The listener will listen to incoming requests on this IP.
-## Front-end port
+## Frontend port
-Choose the front-end port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners.
+Choose the frontend port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners.
## Protocol
Choose HTTP or HTTPS:
- If you choose HTTP, the traffic between the client and the application gateway is unencrypted.
-- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted. And the TLS connection terminates at the application gateway. If you want end-to-end TLS encryption, you must choose HTTPS and configure the **back-end HTTP** setting. This ensures that traffic is re-encrypted when it travels from the application gateway to the back end.
+- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted. And the TLS connection terminates at the application gateway. If you want end-to-end TLS encryption, you must choose HTTPS and configure the **backend HTTP** setting. This ensures that traffic is re-encrypted when it travels from the application gateway to the back end.
To configure TLS termination, a TLS/SSL certificate must be added to the listener. This allows the Application Gateway to decrypt incoming traffic and encrypt response traffic to the client. The certificate provided to the Application Gateway must be in Personal Information Exchange (PFX) format, which contains both the private and public keys.
See [Overview of TLS termination and end to end TLS with Application Gateway](ss
### HTTP2 support
-HTTP/2 protocol support is available to clients that connect to application gateway listeners only. The communication to back-end server pools is over HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
+HTTP/2 protocol support is available to clients that connect to application gateway listeners only. The communication to backend server pools is over HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
```azurepowershell
$gw = Get-AzApplicationGateway -Name test -ResourceGroupName hm
# The original snippet was cut off here; a minimal assumed completion that
# flips the EnableHttp2 flag on the gateway object and commits it:
$gw.EnableHttp2 = $true
Set-AzApplicationGateway -ApplicationGateway $gw
```
To configure a global custom error page, see [Azure PowerShell configuration](./
## TLS policy
-You can centralize TLS/SSL certificate management and reduce encryption-decryption overhead for a back-end server farm. Centralized TLS handling also lets you specify a central TLS policy that's suited to your security requirements. You can choose *default*, *predefined*, or *custom* TLS policy.
+You can centralize TLS/SSL certificate management and reduce encryption-decryption overhead for a backend server farm. Centralized TLS handling also lets you specify a central TLS policy that's suited to your security requirements. You can choose *default*, *predefined*, or *custom* TLS policy.
You configure TLS policy to control TLS protocol versions. You can configure an application gateway to use a minimum protocol version for TLS handshakes from TLS1.0, TLS1.1, and TLS1.2. By default, SSL 2.0 and 3.0 are disabled and aren't configurable. For more information, see [Application Gateway TLS policy overview](./application-gateway-ssl-policy-overview.md).
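A sketch of enforcing a minimum protocol version with a custom policy; the gateway and resource-group names are placeholders, and the cipher suites shown are examples from the supported list:

```azurepowershell
# A sketch: enforce TLS 1.2 as the minimum version with a custom policy.
# Gateway and resource-group names are illustrative placeholders.
$gw = Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Custom `
  -MinProtocolVersion TLSv1_2 `
  -CipherSuite "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"

# Commit the change.
Set-AzApplicationGateway -ApplicationGateway $gw
```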
application-gateway Configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-overview.md
Last updated 09/09/2020-+
# Application Gateway configuration overview
For more information, see [Application Gateway infrastructure configuration](con
-## Front-end IP address
+## Frontend IP address
You can configure the application gateway to have a public IP address, a private IP address, or both. A public IP is required when you host a back end that clients must access over the Internet via an Internet-facing virtual IP (VIP).
-For more information, see [Application Gateway front-end IP address configuration](configuration-front-end-ip.md).
+For more information, see [Application Gateway frontend IP address configuration](configuration-frontend-ip.md).
## Listeners
For more information, see [Application Gateway listener configuration](configura
## Request routing rules
-When you create an application gateway by using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default back-end pool (*appGatewayBackendPool*) and the default back-end HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
+When you create an application gateway by using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default backend pool (*appGatewayBackendPool*) and the default backend HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
For more information, see [Application Gateway request routing rules](configuration-request-routing-rules.md).

## HTTP settings
-The application gateway routes traffic to the back-end servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
+The application gateway routes traffic to the backend servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
For more information, see [Application Gateway HTTP settings configuration](configuration-http-settings.md).
-## Back-end pool
+## Backend pool
-You can point a back-end pool to four types of backend members: a specific virtual machine, a virtual machine scale set, an IP address/FQDN, or an app service.
+You can point a backend pool to four types of backend members: a specific virtual machine, a virtual machine scale set, an IP address/FQDN, or an app service.
-After you create a back-end pool, you must associate it with one or more request-routing rules. You must also configure health probes for each back-end pool on your application gateway. When a request-routing rule condition is met, the application gateway forwards the traffic to the healthy servers (as determined by the health probes) in the corresponding back-end pool.
+After you create a backend pool, you must associate it with one or more request-routing rules. You must also configure health probes for each backend pool on your application gateway. When a request-routing rule condition is met, the application gateway forwards the traffic to the healthy servers (as determined by the health probes) in the corresponding backend pool.
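As a sketch of two of the target types mentioned above, a pool can be defined by IP addresses or by FQDNs; all values here are illustrative placeholders:

```azurepowershell
# Sketches of backend pools by target type; all values are illustrative placeholders.

# Pool of IP addresses (for example, VM NIC private IPs).
New-AzApplicationGatewayBackendAddressPool -Name "ipPool" -BackendIPAddresses "10.0.1.4","10.0.1.5"

# Pool of FQDNs (for example, an App Service default host name).
New-AzApplicationGatewayBackendAddressPool -Name "fqdnPool" -BackendFqdns "myapp.azurewebsites.net"
```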
## Health probes
-An application gateway monitors the health of all resources in its back end by default. But we strongly recommend that you create a custom probe for each back-end HTTP setting to get greater control over health monitoring. To learn how to configure a custom probe, see [Custom health probe settings](application-gateway-probe-overview.md#custom-health-probe-settings).
+An application gateway monitors the health of all resources in its back end by default. But we strongly recommend that you create a custom probe for each backend HTTP setting to get greater control over health monitoring. To learn how to configure a custom probe, see [Custom health probe settings](application-gateway-probe-overview.md#custom-health-probe-settings).
> [!NOTE]
-> After you create a custom health probe, you need to associate it to a back-end HTTP setting. A custom probe won't monitor the health of the back-end pool unless the corresponding HTTP setting is explicitly associated with a listener using a rule.
+> After you create a custom health probe, you need to associate it to a backend HTTP setting. A custom probe won't monitor the health of the backend pool unless the corresponding HTTP setting is explicitly associated with a listener using a rule.
## Next steps
application-gateway Configuration Request Routing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-request-routing-rules.md
Last updated 09/09/2020-+
# Application Gateway request routing rules
-When you create an application gateway using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default back-end pool (*appGatewayBackendPool*) and the default back-end HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
+When you create an application gateway using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default backend pool (*appGatewayBackendPool*) and the default backend HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
## Rule type

When you create a rule, you choose between [*basic* and *path-based*](./application-gateway-components.md#request-routing-rules).

-- Choose basic if you want to forward all requests on the associated listener (for example, *blog<i></i>.contoso.com/\*)* to a single back-end pool.
-- Choose path-based if you want to route requests from specific URL paths to specific back-end pools. The path pattern is applied only to the path of the URL, not to its query parameters.
+- Choose basic if you want to forward all requests on the associated listener (for example, *blog<i></i>.contoso.com/\*)* to a single backend pool.
+- Choose path-based if you want to route requests from specific URL paths to specific backend pools. The path pattern is applied only to the path of the URL, not to its query parameters.
### Order of processing rules
For the v1 and v2 SKU, pattern matching of incoming requests is processed in the
## Associated listener
-Associate a listener to the rule so that the *request-routing rule* that's associated with the listener is evaluated to determine the back-end pool to route the request to.
+Associate a listener to the rule so that the *request-routing rule* that's associated with the listener is evaluated to determine the backend pool to route the request to.
-## Associated back-end pool
+## Associated backend pool
-Associate to the rule the back-end pool that contains the back-end targets that serve requests that the listener receives.
+Associate to the rule the backend pool that contains the backend targets that serve requests that the listener receives.
+ - For a basic rule, only one backend pool is allowed. All requests on the associated listener are forwarded to that backend pool.
+ - For a path-based rule, add multiple backend pools that correspond to each URL path. The requests that match the URL path that's entered are forwarded to the corresponding backend pool. Also, add a default backend pool. Requests that don't match any URL path in the rule are forwarded to that pool.
-## Associated back-end HTTP setting
+## Associated backend HTTP setting
-Add a back-end HTTP setting for each rule. Requests are routed from the application gateway to the back-end targets by using the port number, protocol, and other information that's specified in this setting.
+Add a backend HTTP setting for each rule. Requests are routed from the application gateway to the backend targets by using the port number, protocol, and other information that's specified in this setting.
-For a basic rule, only one back-end HTTP setting is allowed. All requests on the associated listener are forwarded to the corresponding back-end targets by using this HTTP setting.
+For a basic rule, only one backend HTTP setting is allowed. All requests on the associated listener are forwarded to the corresponding backend targets by using this HTTP setting.
-For a path-based rule, add multiple back-end HTTP settings that correspond to each URL path. Requests that match the URL path in this setting are forwarded to the corresponding back-end targets by using the HTTP settings that correspond to each URL path. Also, add a default HTTP setting. Requests that don't match any URL path in this rule are forwarded to the default back-end pool by using the default HTTP setting.
+For a path-based rule, add multiple backend HTTP settings that correspond to each URL path. Requests that match the URL path in this setting are forwarded to the corresponding backend targets by using the HTTP settings that correspond to each URL path. Also, add a default HTTP setting. Requests that don't match any URL path in this rule are forwarded to the default backend pool by using the default HTTP setting.
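Putting the pieces together, a path-based rule could be sketched as follows. All referenced pools, settings, and the listener are placeholders assumed to already exist on the gateway:

```azurepowershell
# A sketch: route /images/* to a dedicated pool; everything else goes to the default pool.
# All referenced pools, settings, and the listener are placeholders assumed to exist.
$gw          = Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"
$imagesPool  = Get-AzApplicationGatewayBackendAddressPool -Name "imagesPool" -ApplicationGateway $gw
$defaultPool = Get-AzApplicationGatewayBackendAddressPool -Name "defaultPool" -ApplicationGateway $gw
$setting     = Get-AzApplicationGatewayBackendHttpSetting -Name "Setting01" -ApplicationGateway $gw
$listener    = Get-AzApplicationGatewayHttpListener -Name "listener01" -ApplicationGateway $gw

# One path rule per URL path, plus a default pool for everything else.
$imagesRule = New-AzApplicationGatewayPathRuleConfig -Name "imagesRule" -Paths "/images/*" `
  -BackendAddressPool $imagesPool -BackendHttpSettings $setting

Add-AzApplicationGatewayUrlPathMapConfig -ApplicationGateway $gw -Name "urlPathMap" -PathRules $imagesRule `
  -DefaultBackendAddressPool $defaultPool -DefaultBackendHttpSettings $setting
$pathMap = Get-AzApplicationGatewayUrlPathMapConfig -ApplicationGateway $gw -Name "urlPathMap"

Add-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $gw -Name "pathRule01" `
  -RuleType PathBasedRouting -HttpListener $listener -UrlPathMap $pathMap
Set-AzApplicationGateway -ApplicationGateway $gw
```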
## Redirection setting
application-gateway Configure Alerts With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-alerts-with-templates.md
Title: Configure Azure Monitor alerts for Application Gateway
description: Learn how to use ARM templates to configure Azure Monitor alerts for Application Gateway-+
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
Title: Configure an internal load balancer (ILB) endpoint
description: This article provides information on how to configure Application Gateway Standard v1 with a private frontend IP address -+ Last updated 01/11/2022
In this example, you create a new virtual network. You can create a virtual netw
## Add backend pool
-The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway.
+The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway.
To do this, you:
The client virtual machine is used to connect to the application gateway backend
## Next steps
-If you want to monitor the health of your backend pool, see [Back-end health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md).
+If you want to monitor the health of your backend pool, see [Backend health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md).
application-gateway Configure Key Vault Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-key-vault-portal.md
Title: Configure TLS termination with Key Vault certificates - Portal
description: Learn how to use the Azure portal to integrate your key vault with your application gateway for TLS/SSL termination certificates. -+ Last updated 10/01/2021
At this point, your Azure account is the only one authorized to perform operatio
:::image type="content" source="media/configure-key-vault-portal/create-key-vault-certificate.png" alt-text="Screenshot of key vault certificate creation":::

> [!Important]
-> Issuance policies only affect certificates that will be issued in the future. Modifying this issuance policy will not affect any existing certificates.
+> Issuance policies only affect certificates that will be issued in the future. Modifying this issuance policy won't affect any existing certificates.
### Create a Virtual Network
You can configure the Frontend IP to be Public or Private as per your use case.
**Backends tab**
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Configure Keyvault Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-keyvault-ps.md
Title: Configure TLS termination with Key Vault certificates - PowerShell
-description: Learn how how to use an Azure PowerShell script to integrate your key vault with your application gateway for TLS/SSL termination certificates.
+description: Learn how to use an Azure PowerShell script to integrate your key vault with your application gateway for TLS/SSL termination certificates.
```azurepowershell
$publicip = New-AzPublicIpAddress -ResourceGroupName $rgname -name "AppGwIP" `
 -location $location -AllocationMethod Static -Sku Standard
```
-### Create pool and front-end ports
+### Create pool and frontend ports
```azurepowershell
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "appgwSubnet" -VirtualNetwork $vnet
```
application-gateway Configure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-web-app.md
Title: Manage traffic to App Service
description: This article provides guidance on how to configure Application Gateway with Azure App Service -+ Last updated 02/17/2022-+
<!-- markdownlint-disable MD044 -->
# Configure App Service with Application Gateway
-Application gateway allows you to have an App Service app or other multi-tenant service as a back-end pool member. In this article, you learn to configure an App Service app with Application Gateway. The configuration for Application Gateway will differ depending on how App Service will be accessed:
+Application gateway allows you to have an App Service app or other multi-tenant service as a backend pool member. In this article, you learn to configure an App Service app with Application Gateway. The configuration for Application Gateway will differ depending on how App Service will be accessed:
- The first option makes use of a **custom domain** on both Application Gateway and the App Service in the backend.
- The second option is to have Application Gateway access App Service using its **default domain**, suffixed as ".azurewebsites.net".
Application gateway allows you to have an App Service app or other multi-tenant
This configuration is recommended for production-grade scenarios and meets the practice of not changing the host name in the request flow. You are required to have a custom domain (and associated certificate) available to avoid having to rely on the default ".azurewebsites" domain.
-By associating the same domain name to both Application Gateway and App Service in the backend pool, the request flow does not need to override the host name. The backend web application will see the original host as was used by the client.
+By associating the same domain name to both Application Gateway and App Service in the backend pool, the request flow doesn't need to override the host name. The backend web application will see the original host as was used by the client.
:::image type="content" source="media/configure-web-app/scenario-application-gateway-to-azure-app-service-custom-domain.png" alt-text="Scenario overview for Application Gateway to App Service using the same custom domain for both":::
## [Default domain](#tab/defaultdomain)
-This configuration is the easiest and does not require a custom domain. As such it allows for a quick convenient setup.
+This configuration is the easiest and doesn't require a custom domain. As such, it allows for a quick, convenient setup.
> [!WARNING]
> This configuration comes with limitations. We recommend reviewing the implications of using different host names between the client and Application Gateway and between Application Gateway and App Service in the backend. For more information, please review the article in Architecture Center: [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation)
-When App Service does not have a custom domain associated with it, the host header on the incoming request on the web application will need to be set to the default domain, suffixed with ".azurewebsites.net" or else the platform will not be able to properly route the request.
+When App Service doesn't have a custom domain associated with it, the host header on the incoming request on the web application will need to be set to the default domain, suffixed with ".azurewebsites.net", or else the platform won't be able to properly route the request.
The host header in the original request received by the Application Gateway will be different from the host name of the backend App Service.
We will connect to the backend using HTTPS.
1. Under **HTTP Settings**, select an existing HTTP setting or add a new one.
2. When creating a new HTTP Setting, give it a name.
3. Select HTTPS as the desired backend protocol using port 443.
-4. If the certificate is signed by a well known authority, select "Yes" for "User well known CA certificate". Alternatively [Add authentication/trusted root certificates of back-end servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers)
+4. If the certificate is signed by a well-known authority, select "Yes" for "Use well known CA certificate". Alternatively [Add authentication/trusted root certificates of backend servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-backend-servers)
5. Make sure to set "Override with new host name" to "No".
6. Select the custom HTTPS health probe in the dropdown for "Custom probe".
> [!Note]
An HTTP Setting is required that instructs Application Gateway to access the App
1. Under **HTTP Settings**, select an existing HTTP setting or add a new one.
2. When creating a new HTTP Setting, give it a name.
3. Select HTTPS as the desired backend protocol using port 443.
-4. If the certificate is signed by a well known authority, select "Yes" for "User well known CA certificate". Alternatively [Add authentication/trusted root certificates of back-end servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers)
+4. If the certificate is signed by a well-known authority, select "Yes" for "Use well known CA certificate". Alternatively [Add authentication/trusted root certificates of backend servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-backend-servers)
5. Make sure to set "Override with new host name" to "Yes".
6. Under "Host name override", select "Pick host name from backend target". This setting will cause the request towards App Service to use the "azurewebsites.net" host name, as is configured in the Backend Pool.
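A PowerShell equivalent of this HTTP setting might look like the sketch below (names assumed; cmdlet names per the Az.Network module); the `-PickHostNameFromBackendAddress` switch corresponds to "Pick host name from backend target" in the portal.

```azurepowershell
# Sketch: an HTTPS setting that takes its Host header from the backend pool's FQDN.
$setting = New-AzApplicationGatewayBackendHttpSetting -Name "appServiceSetting" `
    -Port 443 -Protocol Https -CookieBasedAffinity Disabled `
    -PickHostNameFromBackendAddress
```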
if ($listener -eq $null){
## Configure request routing rule
-Provided with the earlier configured Backend Pool and the HTTP Settings, the request routing rule can be set up to take traffic from a listener and route it to the Backend Pool using the HTTP Settings. For this, make sure you have an HTTP or HTTPS listener available that is not already bound to an existing routing rule.
+Using the earlier configured Backend Pool and the HTTP Settings, the request routing rule can be set up to take traffic from a listener and route it to the Backend Pool using the HTTP Settings. For this, make sure you have an HTTP or HTTPS listener available that is not already bound to an existing routing rule.
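A minimal sketch of that rule in Azure PowerShell, assuming the listener, pool, and setting already exist under these illustrative names:

```azurepowershell
# Bind an existing listener to the backend pool and HTTP setting with a basic rule.
$appgw    = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
$listener = Get-AzApplicationGatewayHttpListener -ApplicationGateway $appgw -Name "appGwHttpListener"
$pool     = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $appgw -Name "appServicePool"
$setting  = Get-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $appgw -Name "appServiceSetting"
Add-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $appgw -Name "appServiceRule" `
    -RuleType Basic -HttpListener $listener -BackendAddressPool $pool -BackendHttpSettings $setting
Set-AzApplicationGateway -ApplicationGateway $appgw
```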
### [Azure portal](#tab/azure-portal)
Pay attention to the following non-exhaustive list of potential symptoms when te
- domain-bound cookies not being passed on to the backend - this includes the use of the ["ARR affinity" setting](../app-service/configure-common.md#configure-general-settings) in App Service
-The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application does not deal well with rewriting the host name. This is very common to see. The recommended way to deal with this is to follow the instructions for configuration Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
+The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application doesn't deal well with rewriting the host name. This is commonly seen. The recommended way to deal with this is to follow the instructions for configuring Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
### [Azure portal](#tab/azure-portal/customdomain)
Pay attention to the following non-exhaustive list of potential symptoms when te
- domain-bound cookies not being passed on to the backend - this includes the use of the ["ARR affinity" setting](../app-service/configure-common.md#configure-general-settings) in App Service
-The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application does not deal well with rewriting the host name. This is very common to see. The recommended way to deal with this is to follow the instructions for configuration Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
+The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application doesn't deal well with rewriting the host name. This is commonly seen. The recommended way to deal with this is to follow the instructions for configuring Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
## Restrict access
-The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you are learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions. Consider the following options:
+The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you're learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions. Consider the following options:
- Configure [Access restriction rules based on service endpoints](../app-service/overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints). This allows you to lock down inbound access to the app, making sure the source address is from Application Gateway.
- Use [Azure App Service static IP restrictions](../app-service/app-service-ip-restrictions.md). For example, you can restrict the web app so that it only receives traffic from the application gateway. Use the app service IP restriction feature to list the application gateway VIP as the only address with access.
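As a sketch of the first option (resource names assumed), a service-endpoint-based restriction can be added with the Az.Websites module:

```azurepowershell
# Allow inbound traffic to the app only from the Application Gateway subnet.
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "myResourceGroupAG" -WebAppName "myapp" `
    -Name "AllowAppGwSubnet" -Priority 100 -Action Allow `
    -SubnetName "myAGSubnet" -VirtualNetworkName "myVNet"
```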
application-gateway Create Gateway Internal Load Balancer App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-gateway-internal-load-balancer-app-service-environment.md
Last updated 06/10/2022
-# Back-end server certificate isn't allow-listed for an application gateway using an Internal Load Balancer with an App Service Environment
+# Backend server certificate isn't allow-listed for an application gateway using an Internal Load Balancer with an App Service Environment
This article troubleshoots the following issue: A certificate isn't allow-listed when you create an application gateway by using an Internal Load Balancer (ILB) together with an App Service Environment (ASE) at the back end when using end-to-end TLS in Azure.
## Symptoms
-When you create an application gateway by using an ILB with an ASE at the back end, the back-end server may become unhealthy. This problem occurs if the authentication certificate of the application gateway doesn't match the configured certificate on the back-end server. See the following scenario as an example:
+When you create an application gateway by using an ILB with an ASE at the back end, the backend server may become unhealthy. This problem occurs if the authentication certificate of the application gateway doesn't match the configured certificate on the backend server. See the following scenario as an example:
**Application Gateway configuration:**
When you create an application gateway by using an ILB with an ASE at the back e
- **App Service:** test.appgwtestase.com
- **SSL Binding:** SNI SSL – CN=test.appgwtestase.com
-When you access the application gateway, you receive the following error message because the back-end server is unhealthy:
+When you access the application gateway, you receive the following error message because the backend server is unhealthy:
**502 – Web server received an invalid response while acting as a gateway or proxy server.**
## Solution
-When you don't use a host name to access an HTTPS website, the back-end server will return the configured certificate on the default website, in case SNI is disabled. For an ILB ASE, the default certificate comes from the ILB certificate. If there are no configured certificates for the ILB, the certificate comes from the ASE App certificate.
+When you don't use a host name to access an HTTPS website, the backend server will return the configured certificate on the default website, in case SNI is disabled. For an ILB ASE, the default certificate comes from the ILB certificate. If there are no configured certificates for the ILB, the certificate comes from the ASE App certificate.
-When you use a fully qualified domain name (FQDN) to access the ILB, the back-end server will return the correct certificate that's uploaded in the HTTP settings. If that isn't the case , consider the following options:
+When you use a fully qualified domain name (FQDN) to access the ILB, the backend server will return the correct certificate that's uploaded in the HTTP settings. If that isn't the case, consider the following options:
-- Use FQDN in the back-end pool of the application gateway to point to the IP address of the ILB. This option only works if you have a private DNS zone or a custom DNS configured. Otherwise, you have to create an "A" record for a public DNS.
+- Use FQDN in the backend pool of the application gateway to point to the IP address of the ILB. This option only works if you have a private DNS zone or a custom DNS configured. Otherwise, you have to create an "A" record for a public DNS.
- Use the uploaded certificate on the ILB or the default certificate (ILB certificate) in the HTTP settings. The application gateway gets the certificate when it accesses the ILB's IP for the probe.
-- Use a wildcard certificate on the ILB and the back-end server, so that for all the websites, the certificate is common. However, this solution is possible only for subdomains and not if each of the websites require different hostnames.
+- Use a wildcard certificate on the ILB and the backend server, so that for all the websites, the certificate is common. However, this solution is possible only for subdomains and not if each website requires different hostnames.
- Clear the **Use for App service** option for the application gateway if you're using the IP address of the ILB.
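The first option might be scripted as follows; the host name comes from this article's scenario, while the gateway, pool, and probe names are assumptions.

```azurepowershell
# Point the backend pool at the site's FQDN and probe with that host name so the
# ILB ASE can return the matching SNI certificate.
$appgw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
Add-AzApplicationGatewayBackendAddressPool -ApplicationGateway $appgw -Name "asePool" `
    -BackendFqdns "test.appgwtestase.com"
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $appgw -Name "aseProbe" `
    -Protocol Https -HostName "test.appgwtestase.com" -Path "/" `
    -Interval 30 -Timeout 30 -UnhealthyThreshold 3
Set-AzApplicationGateway -ApplicationGateway $appgw
```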
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Create Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
In this example, you create three virtual machines to be used as backend servers
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/custom-error.md
For more information, see [Add-AzApplicationGatewayCustomError](/powershell/modu
## Next steps
-For information about Application Gateway diagnostics, see [Back-end health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md).
+For information about Application Gateway diagnostics, see [Backend health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md).
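For orientation, a minimal sketch of that cmdlet on an existing gateway; the gateway name and error-page URL are assumptions.

```azurepowershell
# Serve a custom page for 502 responses returned by the gateway.
$appgw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
Add-AzApplicationGatewayCustomError -ApplicationGateway $appgw -StatusCode HttpStatus502 `
    -CustomErrorPageUrl "https://mystorage.blob.core.windows.net/errorpages/502.html"
Set-AzApplicationGateway -ApplicationGateway $appgw
```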
application-gateway Disabled Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/disabled-listeners.md
Title: Understanding disabled listeners description: The article explains the details of a disabled listener and ways to resolve the problem. Last updated: 02/22/2022
The SSL/TLS certificates for Azure Application Gateway's listeners can be referenced from a customer's Key Vault resource. Your application gateway must always have access to the linked key vault resource and its certificate object to ensure smooth operations of the TLS termination feature and the overall health of the gateway resource.
-It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. In case your application gateway is unable to access the associated key vault or locate its certificate object, it will automatically put that listener in a disabled state. The action is triggered only in the case of configuration errors. Transient connectivity problems do not have any impact on the listeners.
+It's important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. If your application gateway is unable to access the associated key vault or locate its certificate object, it automatically puts that listener in a disabled state. The action is triggered only for configuration errors. Transient connectivity problems don't have any impact on the listeners.
A disabled listener doesn't affect the traffic for other operational listeners on your Application Gateway. For example, the HTTP listeners or HTTPS listeners for which the PFX certificate file is directly uploaded on the Application Gateway resource will never go into a disabled state.
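Once Key Vault access is restored, re-pointing the listener certificate at its key vault secret is one way to bring a disabled listener back online; a sketch with assumed names:

```azurepowershell
# Re-link the listener certificate to its Key Vault secret after access is restored.
$appgw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
Set-AzApplicationGatewaySslCertificate -ApplicationGateway $appgw -Name "listenerCert" `
    -KeyVaultSecretId "https://myvault.vault.azure.net/secrets/listenerCert"
Set-AzApplicationGateway -ApplicationGateway $appgw
```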
application-gateway End To End Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/end-to-end-ssl-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Before you begin
-To configure end-to-end TLS with an application gateway, you need a certificate for the gateway. Certificates are also required for the back-end servers. The gateway certificate is used to derive a symmetric key in compliance with the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway.
+To configure end-to-end TLS with an application gateway, you need a certificate for the gateway. Certificates are also required for the backend servers. The gateway certificate is used to derive a symmetric key in compliance with the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway.
-For end-to-end TLS encryption, the right back-end servers must be allowed in the application gateway. To allow this access, upload the public certificate of the back-end servers, also known as Authentication Certificates (v1) or Trusted Root Certificates (v2), to the application gateway. Adding the certificate ensures that the application gateway communicates only with known back-end instances. This configuration further secures end-to-end communication.
+For end-to-end TLS encryption, the right backend servers must be allowed in the application gateway. To allow this access, upload the public certificate of the backend servers, also known as Authentication Certificates (v1) or Trusted Root Certificates (v2), to the application gateway. Adding the certificate ensures that the application gateway communicates only with known backend instances. This configuration further secures end-to-end communication.
To learn more, see [Overview of TLS termination and end to end TLS with Application Gateway](./ssl-overview.md).
## Create a new application gateway with end-to-end TLS
-To create a new application gateway with end-to-end TLS encryption, you'll need to first enable TLS termination while creating a new application gateway. This action enables TLS encryption for communication between the client and application gateway. Then, you'll need to put on the Safe Recipients list the certificates for the back-end servers in the HTTP settings. This configuration enables TLS encryption for communication between the application gateway and the back-end servers. That accomplishes end-to-end TLS encryption.
+To create a new application gateway with end-to-end TLS encryption, you'll need to first enable TLS termination while creating a new application gateway. This action enables TLS encryption for communication between the client and application gateway. Then, you'll need to add the certificates for the backend servers in the HTTP settings to the Safe Recipients list. This configuration enables TLS encryption for communication between the application gateway and the backend servers. That accomplishes end-to-end TLS encryption.
### Enable TLS termination while creating a new application gateway
To learn more, see [enable TLS termination while creating a new application gateway](./create-ssl-portal.md).
-### Add authentication/root certificates of back-end servers
+### Add authentication/root certificates of backend servers
1. Select **All resources**, and then select **myAppGateway**.
To learn more, see [enable TLS termination while creating a new application gate
7. Select the certificate file in the **Upload CER certificate** box.
- For Standard and WAF (v1) application gateways, you should upload the public key of your back-end server certificate in .cer format.
+ For Standard and WAF (v1) application gateways, you should upload the public key of your backend server certificate in .cer format.
![Add certificate](./media/end-to-end-ssl-portal/addcert.png)
- For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the back-end server certificate in .cer format. If the back-end certificate is issued by a well-known certificate authority (CA), you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
+ For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the backend server certificate in .cer format. If the backend certificate is issued by a well-known certificate authority (CA), you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
![Add trusted root certificate](./media/end-to-end-ssl-portal/trustedrootcert-portal.png)
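The same upload can be scripted for the v2 SKU; a minimal sketch, assuming a local .cer file and these resource names:

```azurepowershell
# Upload a trusted root certificate for later use by a backend HTTP setting.
$appgw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
Add-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $appgw `
    -Name "backendRootCert" -CertificateFile "C:\certs\rootCA.cer"
Set-AzApplicationGateway -ApplicationGateway $appgw
```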
To learn more, see [enable TLS termination while creating a new application gate
## Enable end-to-end TLS for an existing application gateway
-To configure an existing application gateway with end-to-end TLS encryption, you must first enable TLS termination in the listener. This action enables TLS encryption for communication between the client and the application gateway. Then, put those certificates for back-end servers in the HTTP settings on the Safe Recipients list. This configuration enables TLS encryption for communication between the application gateway and the back-end servers. That accomplishes end-to-end TLS encryption.
+To configure an existing application gateway with end-to-end TLS encryption, you must first enable TLS termination in the listener. This action enables TLS encryption for communication between the client and the application gateway. Then, add the certificates for the backend servers in the HTTP settings to the Safe Recipients list. This configuration enables TLS encryption for communication between the application gateway and the backend servers. That accomplishes end-to-end TLS encryption.
You'll need to use a listener with the HTTPS protocol and a certificate for enabling TLS termination. You can either use an existing listener that meets those conditions or create a new listener. If you choose the former option, you can ignore the following "Enable TLS termination in an existing application gateway" section and move directly to the "Add authentication/trusted root certificates for backend servers" section.
If you choose the latter option, apply the steps in the following procedure.
7. Select **OK** to save.
-### Add authentication/trusted root certificates of back-end servers
+### Add authentication/trusted root certificates of backend servers
1. Select **All resources**, and then select **myAppGateway**.
-2. Select **HTTP settings** from the left-side menu. You can either put certificates in an existing back-end HTTP setting on the Safe Recipients list or create a new HTTP setting. (In the next step, the certificate for the default HTTP setting, **appGatewayBackendHttpSettings**, is added to the Safe Recipients list.)
+2. Select **HTTP settings** from the left-side menu. You can either put certificates in an existing backend HTTP setting on the Safe Recipients list or create a new HTTP setting. (In the next step, the certificate for the default HTTP setting, **appGatewayBackendHttpSettings**, is added to the Safe Recipients list.)
3. Select **appGatewayBackendHttpSettings**.
If you choose the latter option, apply the steps in the following procedure.
7. Select the certificate file in the **Upload CER certificate** box.
- For Standard and WAF (v1) application gateways, you should upload the public key of your back-end server certificate in .cer format.
+ For Standard and WAF (v1) application gateways, you should upload the public key of your backend server certificate in .cer format.
![Add certificate](./media/end-to-end-ssl-portal/addcert.png)
- For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the back-end server certificate in .cer format. If the back-end certificate is issued by a well-known CA, you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
+ For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the backend server certificate in .cer format. If the backend certificate is issued by a well-known CA, you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
![Add trusted root certificate](./media/end-to-end-ssl-portal/trustedrootcert-portal.png)
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md
For more information, see [Application Gateway Ingress Controller (AGIC)](ingres
## URL-based routing
-URL Path Based Routing allows you to route traffic to back-end server pools based on URL Paths of the request.
+URL Path Based Routing allows you to route traffic to backend server pools based on URL Paths of the request.
One scenario is to route requests for different content types to different pools. For example, requests for `http://contoso.com/video/*` are routed to VideoServerPool, and `http://contoso.com/images/*` are routed to ImageServerPool. DefaultServerPool is selected if none of the path patterns match.
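That example can be sketched in Azure PowerShell; the pool addresses and setting values below are assumptions, and only the in-memory configuration objects are shown.

```azurepowershell
# Build the path map for the video/images example; all names are illustrative.
$poolSetting = New-AzApplicationGatewayBackendHttpSetting -Name "httpSetting" `
    -Port 80 -Protocol Http -CookieBasedAffinity Disabled
$videoPool   = New-AzApplicationGatewayBackendAddressPool -Name "VideoServerPool" -BackendIPAddresses "10.0.1.4"
$imagePool   = New-AzApplicationGatewayBackendAddressPool -Name "ImageServerPool" -BackendIPAddresses "10.0.1.5"
$defaultPool = New-AzApplicationGatewayBackendAddressPool -Name "DefaultServerPool" -BackendIPAddresses "10.0.1.6"
$videoRule = New-AzApplicationGatewayPathRuleConfig -Name "video" -Paths "/video/*" `
    -BackendAddressPool $videoPool -BackendHttpSettings $poolSetting
$imageRule = New-AzApplicationGatewayPathRuleConfig -Name "images" -Paths "/images/*" `
    -BackendAddressPool $imagePool -BackendHttpSettings $poolSetting
New-AzApplicationGatewayUrlPathMapConfig -Name "urlPathMap" -PathRules $videoRule, $imageRule `
    -DefaultBackendAddressPool $defaultPool -DefaultBackendHttpSettings $poolSetting
```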
HTTP headers allow the client and server to pass additional information with the
- Removing response header fields that can reveal sensitive information.
- Stripping port information from X-Forwarded-For headers.
-Application Gateway and WAF v2 SKU supports the capability to add, remove, or update HTTP request and response headers, while the request and response packets move between the client and back-end pools. You can also rewrite URLs, query string parameters and host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the re-evaluate path map option.
+The Application Gateway and WAF v2 SKUs support the capability to add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend pools. You can also rewrite URLs, query string parameters, and the host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the re-evaluate path map option.
It also provides you with the capability to add conditions to ensure the specified headers or URL are rewritten only when certain conditions are met. These conditions are based on the request and response information.
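A sketch of one such rewrite in PowerShell: the `{var_add_x_forwarded_for_proxy}` server variable is the documented way to emit X-Forwarded-For without the port, while the rule and set names are assumptions.

```azurepowershell
# Build an action set that rewrites a request header and blanks a response header.
$xff = New-AzApplicationGatewayRewriteRuleHeaderConfiguration `
    -HeaderName "X-Forwarded-For" -HeaderValue "{var_add_x_forwarded_for_proxy}"
$server = New-AzApplicationGatewayRewriteRuleHeaderConfiguration `
    -HeaderName "Server" -HeaderValue ""
$actions = New-AzApplicationGatewayRewriteRuleActionSet `
    -RequestHeaderConfiguration $xff -ResponseHeaderConfiguration $server
$rule = New-AzApplicationGatewayRewriteRule -Name "headerRewrites" -ActionSet $actions
$ruleSet = New-AzApplicationGatewayRewriteRuleSet -Name "rewriteRuleSet" -RewriteRule $rule
```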
For a complete list of application gateway limits, see [Application Gateway serv
The following table shows an average performance throughput for each application gateway v1 instance with SSL offload enabled:
-| Average back-end page response size | Small | Medium | Large |
+| Average backend page response size | Small | Medium | Large |
| - | - | - | - |
| 6 KB | 7.5 Mbps | 13 Mbps | 50 Mbps |
| 100 KB | 35 Mbps | 100 Mbps | 200 Mbps |
> [!NOTE]
-> These values are approximate values for an application gateway throughput. The actual throughput depends on various environment details, such as average page size, location of back-end instances, and processing time to serve a page. For exact performance numbers, you should run your own tests. These values are only provided for capacity planning guidance.
+> These values are approximate values for an application gateway throughput. The actual throughput depends on various environment details, such as average page size, location of backend instances, and processing time to serve a page. For exact performance numbers, you should run your own tests. These values are only provided for capacity planning guidance.
## Version feature comparison
application-gateway High Traffic Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/high-traffic-support.md
Title: Application Gateway high traffic volume support description: This article provides guidance to configure Azure Application Gateway in support of high network traffic volume scenarios. Last updated: 03/24/2020
# Application Gateway high traffic support
You can use Application Gateway with Web Application Firewall (WAF) for a scalable and secure way to manage traffic to your web applications.
-It is important that you scale your Application Gateway according to your traffic and with a bit of a buffer so that you are prepared for any traffic surges or spikes and minimizing the impact that it may have in your QoS. The following suggestions help you set up Application Gateway with WAF to handle extra traffic.
+It's important that you scale your Application Gateway according to your traffic, with a bit of a buffer, so that you're prepared for any traffic surges or spikes and minimize the impact they may have on your QoS. The following suggestions help you set up Application Gateway with WAF to handle extra traffic.
Please check the [metrics documentation](./application-gateway-metrics.md) for the complete list of metrics offered by Application Gateway. See [visualize metrics](./application-gateway-metrics.md#metrics-visualization) in the Azure portal and the [Azure monitor documentation](../azure-monitor/alerts/alerts-metric.md) on how to set alerts for metrics.
## Scaling for Application Gateway v1 SKU (Standard/WAF SKU)
### Set your instance count based on your peak CPU usage
-If you are using a v1 SKU gateway, you'll have the ability to set your Application Gateway up to 32 instances for scaling. Check your Application Gateway's CPU utilization in the past one month for any spikes above 80%, it is available as a metric for you to monitor. It is recommended that you set your instance count according to your peak usage and with a 10% to 20% additional buffer to account for any traffic spikes.
+If you're using a v1 SKU gateway, you'll have the ability to scale your Application Gateway up to 32 instances. Check your Application Gateway's CPU utilization over the past month for any spikes above 80%; it's available as a metric for you to monitor. It's recommended that you set your instance count according to your peak usage, with a 10% to 20% additional buffer to account for any traffic spikes.
:::image type="content" source="./media/application-gateway-covid-guidelines/v1-cpu-utilization-inline.png" alt-text="V1 CPU utilization metrics" lightbox="./media/application-gateway-covid-guidelines/v1-cpu-utilization-exp.png":::
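A sketch of such a capacity bump on a v1 gateway (SKU name and count are assumptions):

```azurepowershell
# Raise the v1 instance count to cover peak usage plus a 10-20% buffer.
$appgw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
Set-AzApplicationGatewaySku -ApplicationGateway $appgw -Name "WAF_Medium" -Tier "WAF" -Capacity 6
Set-AzApplicationGateway -ApplicationGateway $appgw
```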
application-gateway How Application Gateway Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-application-gateway-works.md
This article explains how an [application gateway](overview.md) accepts incoming
Azure Application Gateway can be used as an internal application load balancer or as an internet-facing application load balancer. An internet-facing application gateway uses public IP addresses. The DNS name of an internet-facing application gateway is publicly resolvable to its public IP address. As a result, internet-facing application gateways can route client requests from the internet.
-Internal application gateways use only private IP addresses. If you are using a Custom or [Private DNS zone](../dns/private-dns-overview.md), the domain name should be internally resolvable to the private IP address of the Application Gateway. Therefore, internal load-balancers can only route requests from clients with access to a virtual network for the application gateway.
+Internal application gateways use only private IP addresses. If you're using a Custom or [Private DNS zone](../dns/private-dns-overview.md), the domain name should be internally resolvable to the private IP address of the Application Gateway. Therefore, internal load-balancers can only route requests from clients with access to a virtual network for the application gateway.
## How an application gateway routes a request
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
The problem in maintaining cookie-based session affinity may happen due to the f
- "Cookie-based Affinity" setting is not enabled
- Your application cannot handle cookie-based affinity
-- Application is using cookie-based affinity but requests still bouncing between back-end servers
+- Application is using cookie-based affinity but requests still bouncing between backend servers
### Check whether the "Cookie-based Affinity" setting is enabled
The application gateway can only perform session-based affinity by using a cooki
If the application can't handle cookie-based affinity, you must use an external or internal Azure load balancer or another third-party solution.
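For reference, enabling the setting on an existing HTTP setting can be sketched as follows (setting name assumed; cmdlet names per the Az.Network module):

```azurepowershell
# Flip cookie-based affinity on for an existing backend HTTP setting.
$appgw   = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
$setting = Get-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $appgw -Name "appGatewayBackendHttpSettings"
$setting.CookieBasedAffinity = "Enabled"
Set-AzApplicationGateway -ApplicationGateway $appgw
```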
-### Application is using cookie-based affinity but requests still bouncing between back-end servers
+### Application is using cookie-based affinity but requests still bouncing between backend servers
#### Symptom
-You have enabled the Cookie-based Affinity setting, when you access the Application Gateway by using a short name URL in Internet Explorer, for example: `http://website` , the request is still bouncing between back-end servers.
+You've enabled the Cookie-based Affinity setting. When you access the Application Gateway by using a short name URL in Internet Explorer (for example, `http://website`), the request still bounces between backend servers.
To identify this issue, follow these instructions:
Use the web debugger of your choice. In this sample we will use Fiddler to captu
- **Example A:** You find a session log showing that the request is sent from the client and goes to the public IP address of the Application Gateway; click this log to view the details. On the right side, data in the bottom box is what the Application Gateway is returning to the client. Select the "RAW" tab and determine whether the client is receiving a "**Set-Cookie: ApplicationGatewayAffinity=** *ApplicationGatewayAffinityValue*." If there's no cookie, session affinity isn't set, or the Application Gateway isn't applying the cookie back to the client.
> [!NOTE]
- > This ApplicationGatewayAffinity value is the cookie-id, that the Application Gateway sets for the client to be sent to a particular back-end server.
+ > This ApplicationGatewayAffinity value is the cookie-id that the Application Gateway sets for the client to be sent to a particular backend server.
![Screenshot shows an example of details of a log entry with the Set-Cookie value highlighted.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-17.png)
-- **Example B:** The next session log followed by the previous one is the client responding back to the Application Gateway, which has set the ApplicationGatewayAffinity. If the ApplicationGatewayAffinity cookie-id matches, the packet should be sent to the same back-end server that was used previously. Check the next several lines of http communication to see whether the client's ApplicationGatewayAffinity cookie is changing.
+- **Example B:** The next session log, following the previous one, is the client responding back to the Application Gateway with the ApplicationGatewayAffinity cookie set. If the ApplicationGatewayAffinity cookie-id matches, the packet should be sent to the same backend server that was used previously. Check the next several lines of HTTP communication to see whether the client's ApplicationGatewayAffinity cookie is changing.
![Screenshot shows an example of details of a log entry with a cookie highlighted.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-18.png)
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
Azure Application Gateway shouldn't exhibit 500 response codes. Please open a su
HTTP 502 errors can have several root causes, for example:
- NSG, UDR, or custom DNS is blocking access to backend pool members.
-- Back-end VMs or instances of [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) aren't responding to the default health probe.
+- Backend VMs or instances of [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) aren't responding to the default health probe.
- Invalid or improper configuration of custom health probes.
-- Azure Application Gateway's [back-end pool isn't configured or empty](application-gateway-troubleshooting-502.md#empty-backendaddresspool).
+- Azure Application Gateway's [backend pool isn't configured or empty](application-gateway-troubleshooting-502.md#empty-backendaddresspool).
- None of the VMs or instances in [virtual machine scale set are healthy](application-gateway-troubleshooting-502.md#unhealthy-instances-in-backendaddresspool).
- [Request time-out or connectivity issues](application-gateway-troubleshooting-502.md#request-time-out) with user requests.
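A quick way to narrow these causes down is the backend health cmdlet; a minimal sketch (gateway name assumed):

```azurepowershell
# List per-pool backend health to see which members fail the probe and why.
$health = Get-AzApplicationGatewayBackendHealth -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
$health.BackendAddressPools | ForEach-Object { $_.BackendHttpSettingsCollection }
```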
application-gateway Ingress Controller Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-annotations.md
Title: Application Gateway Ingress Controller annotations description: This article provides documentation on the annotations specific to the Application Gateway Ingress Controller. Last updated: 3/18/2022
# Annotations for Application Gateway Ingress Controller
application-gateway Ingress Controller Autoscale Pods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md
Title: Autoscale AKS pods with Azure Application Gateway metrics description: This article provides instructions on how to scale your AKS backend pods using Application Gateway metrics and Azure Kubernetes Metric Adapter. Last updated: 11/4/2019
# Autoscale your AKS pods using Application Gateway Metrics (Beta)
application-gateway Ingress Controller Cookie Affinity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-cookie-affinity.md
Title: Enable cookie based affinity with Application Gateway description: This article provides information on how to enable cookie-based affinity with an Application Gateway. Last updated: 11/4/2019
# Enable Cookie based affinity with an Application Gateway
application-gateway Ingress Controller Disable Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-disable-addon.md
Title: Disable and re-enable Application Gateway Ingress Controller add-on for Azure Kubernetes Service cluster description: This article provides information on how to disable and re-enable the AGIC add-on for your AKS cluster. Last updated: 06/10/2020
# Disable and re-enable AGIC add-on for your AKS cluster
application-gateway Ingress Controller Expose Websocket Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-websocket-server.md
Title: Expose a WebSocket server to Application Gateway description: This article provides information on how to expose a WebSocket server to Application Gateway with ingress controller for AKS clusters. Last updated: 11/4/2019
# Expose a WebSocket server to Application Gateway
curl -i -N -H "Connection: Upgrade" \
## WebSocket Health Probes
-If your deployment does not explicitly define health probes, Application Gateway would attempt an HTTP GET on your WebSocket server endpoint.
+If your deployment doesn't explicitly define health probes, Application Gateway would attempt an HTTP GET on your WebSocket server endpoint.
Depending on the server implementation ([here is one we love](https://github.com/gorilla/websocket/blob/master/examples/chat/main.go)), WebSocket-specific headers may be required (`Sec-Websocket-Version` for instance).
-Since Application Gateway does not add WebSocket headers, the Application Gateway's health probe response from your WebSocket server will most likely be `400 Bad Request`.
+Since Application Gateway doesn't add WebSocket headers, the Application Gateway's health probe response from your WebSocket server will most likely be `400 Bad Request`.
As a result, Application Gateway will mark your pods as unhealthy, which will eventually result in a `502 Bad Gateway` for the consumers of the WebSocket server. To avoid this, you may need to add an HTTP GET handler for a health check to your server (`/health` for instance, which returns `200 OK`).
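If you manage the gateway directly, rather than letting AGIC derive probes from pod readiness probes, a custom probe against such a `/health` endpoint might look like this sketch (names assumed):

```azurepowershell
# Probe a plain HTTP GET endpoint so WebSocket pods aren't marked unhealthy.
$appgw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $appgw -Name "wsHealthProbe" `
    -Protocol Http -Path "/health" -Interval 30 -Timeout 30 -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings
Set-AzApplicationGateway -ApplicationGateway $appgw
```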
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
Title: Create an ingress controller with an existing Application Gateway description: This article provides information on how to deploy an Application Gateway Ingress Controller with an existing Application Gateway. Last updated: 11/4/2019
# Install an Application Gateway Ingress Controller (AGIC) using an existing Application Gateway
kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml
```
The object `prohibit-all-targets`, as the name implies, prohibits AGIC from changing config for *any* host and path.
-Helm install with `appgw.shared=true` will deploy AGIC, but will not make any changes to Application Gateway.
+Helm install with `appgw.shared=true` will deploy AGIC, but won't make any changes to Application Gateway.
### Broaden permissions
application-gateway Ingress Controller Letsencrypt Certificate Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
# certificates, and issues related to your account. email: <YOUR.EMAIL@ADDRESS> # ACME server URL for Let's Encrypt's staging environment.
- # The staging environment will not issue trusted certificates but is
+ # The staging environment won't issue trusted certificates but is
# used to ensure that the verification process is working properly # before moving to production server: https://acme-staging-v02.api.letsencrypt.org/directory
application-gateway Ingress Controller Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-migration.md
Title: How to migrate from Azure Application Gateway Ingress Controller Helm to AGIC add-on description: This article provides instructions on how to migrate from AGIC deployed through Helm to AGIC deployed as an AKS add-on. Last updated: 03/02/2021
# Migrate from AGIC Helm to AGIC add-on
appgwId=$(az network application-gateway show -n myApplicationGateway -g myResou
```
## Delete AGIC Helm from your AKS cluster
-Through Azure CLI, delete your AGIC Helm deployment from your cluster. You'll need to delete the AGIC Helm deployment first before you can enable the AGIC AKS add-on. Please note that any changes that occur within your AKS cluster between the time of deleting your AGIC Helm deployment and the time you enable the AGIC add-on won't be reflected on your Application Gateway, and therefore this migration process should be done outside of business hours to minimize impact. Application Gateway will continue to have the last configuration applied by AGIC so existing routing rules will not be affected.
+Through Azure CLI, delete your AGIC Helm deployment from your cluster. You'll need to delete the AGIC Helm deployment before you can enable the AGIC AKS add-on. Any changes that occur within your AKS cluster between the time you delete your AGIC Helm deployment and the time you enable the AGIC add-on won't be reflected on your Application Gateway, so this migration should be done outside of business hours to minimize impact. Application Gateway will continue to have the last configuration applied by AGIC, so existing routing rules won't be affected.
## Enable AGIC add-on using your existing Application Gateway
You can now enable the AGIC add-on in your AKS cluster to target your existing Application Gateway through Azure CLI or Portal. Run the following Azure CLI command to enable the AGIC add-on in your AKS cluster. The example enables the add-on in a cluster called *myCluster*, in a resource group called *myResourceGroup*, using the Application Gateway resource ID *appgwId* we saved above in the earlier step.
application-gateway Ingress Controller Multiple Namespace Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-multiple-namespace-support.md
Title: Enable multiple namespace support for Application Gateway Ingress Controller description: This article provides information on how to enable multiple namespace support in a Kubernetes cluster with an Application Gateway Ingress Controller. Last updated: 11/4/2019
# Enable multiple Namespace support in an AKS cluster with Application Gateway Ingress Controller
application-gateway Ingress Controller Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-overview.md
Title: What is Azure Application Gateway Ingress Controller? description: This article provides an introduction to what Application Gateway Ingress Controller is. Last updated: 03/02/2021
# What is Application Gateway Ingress Controller?
The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, w
The Ingress Controller runs in its own pod on the customer's AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to Application Gateway specific configuration and applied to the [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md).
## Benefits of Application Gateway Ingress Controller
-AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and does not require NodePort or KubeProxy services. This also brings better performance to your deployments.
+AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and doesn't require NodePort or KubeProxy services. This also brings better performance to your deployments.
Ingress Controller is supported exclusively by Standard_v2 and WAF_v2 SKUs, which also brings you autoscaling benefits. Application Gateway can react in response to an increase or decrease in traffic load and scale accordingly, without consuming any resources from your AKS cluster.
application-gateway Ingress Controller Private Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-private-ip.md
Title: Use private IP address for internal routing for an ingress endpoint description: This article provides information on how to use private IPs for internal routing and thus exposing the Ingress endpoint within a cluster to the rest of the VNet. Last updated: 11/4/2019
# Use private IP for internal routing for an Ingress endpoint
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
When you're using a restricted Key Vault, use the following steps to configure A
> [!Note]
> If you deploy the Application Gateway instance via an ARM template by using either the Azure CLI or PowerShell, or via an Azure application deployed from the Azure portal, the SSL certificate is stored in the Key Vault as a Base64-encoded PFX file. You must complete the steps in [Use Azure Key Vault to pass secure parameter value during deployment](../azure-resource-manager/templates/key-vault-parameter.md).
>
-> It's particularly important to set `enabledForTemplateDeployment` to `true`. The certificate might or might not have a password. In the case of a certificate with a password, the following example shows a possible configuration for the `sslCertificates` entry in `properties` for the ARM template configuration for Application Gateway.
+> It's particularly important to set `enabledForTemplateDeployment` to `true`. The certificate might or might not have a password. For a certificate with a password, the following example shows a possible configuration for the `sslCertificates` entry in `properties` for the ARM template configuration for Application Gateway.
> > ``` > "sslCertificates": [
application-gateway Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/log-analytics.md
Once your Application Gateway WAF is operational, you can enable logs to inspect
## Import WAF logs
-To import your firewall logs into Log Analytics, see [Back-end health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md#diagnostic-logging). When you have the firewall logs in your Log Analytics workspace, you can view data, write queries, create visualizations, and add them to your portal dashboard.
+To import your firewall logs into Log Analytics, see [Backend health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md#diagnostic-logging). When you have the firewall logs in your Log Analytics workspace, you can view data, write queries, create visualizations, and add them to your portal dashboard.
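Enabling the export itself can be scripted; a sketch with assumed resource names (`Set-AzDiagnosticSetting` is from the Az.Monitor module):

```azurepowershell
# Send Application Gateway firewall logs to a Log Analytics workspace.
$appgwId = (Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG").Id
$wsId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName "myResourceGroupAG" -Name "myWorkspace").ResourceId
Set-AzDiagnosticSetting -ResourceId $appgwId -WorkspaceId $wsId -Enabled $true `
    -Category "ApplicationGatewayFirewallLog"
```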
## Explore data with examples
Once you create a query, you can add it to your dashboard. Select the **Pin to
## Next steps
-[Back-end health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md)
+[Backend health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md)
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
No. The script doesn't replicate this configuration for v2. You must add the lo
### Does this script support certificates uploaded to Azure KeyVault?
-No. Currently the script does not support certificates in KeyVault. However, this is being considered for a future version.
+No. Currently the script doesn't support certificates in KeyVault. However, this is being considered for a future version.
### I ran into some issues with using this script. How can I get help?
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md
For reference, see a list of [all resource logs category types supported in Azur
> [!NOTE]
> The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.
-For more information, see [Back-end health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md#access-log)
+For more information, see [Backend health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md#access-log)
Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-mon
|:-|:-|:-|
| **Activitylog** | Activity log | Activity log entries are collected by default. You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. |
| **ApplicationGatewayAccessLog** | Access log | You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP address, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property. |
-| **ApplicationGatewayPerformanceLog**|Performance log|You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy back-end instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.|
+| **ApplicationGatewayPerformanceLog**|Performance log|You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.|
|**ApplicationGatewayFirewallLog**|Firewall log|You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.|
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
<!-- Optional diagram showing monitoring for your service. If you need help creating one, contact robb@microsoft.com -->
Azure Monitor alerts proactively notify you when important conditions are found
<!-- only include next line if applications run on your service and work with App Insights. -->
-If you are creating or running an application which use Application Gateway [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
+If you're creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
<!-- end --> The following tables list common and recommended alert rules for Application Gateway.
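As a hedged example of creating one such alert rule with the Azure CLI (the resource names are placeholders and the metric threshold is illustrative), the following fires whenever any backend host is reported unhealthy:

```azurecli
# Sketch: alert when the UnhealthyHostCount metric rises above zero.
# Resource names are placeholders.
az monitor metrics alert create \
  --name appgw-unhealthy-hosts \
  --resource-group myResourceGroupAG \
  --scopes $(az network application-gateway show \
      --resource-group myResourceGroupAG \
      --name myAppGateway \
      --query id --output tsv) \
  --condition "avg UnhealthyHostCount > 0" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Application Gateway is reporting unhealthy backend hosts"
```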
application-gateway Mutual Authentication Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-certificate-management.md
Title: Export trusted client CA certificate chain for client authentication
description: Learn how to export a trusted client CA certificate chain for client authentication on Azure Application Gateway -+ Last updated 03/31/2021-+ # Export a trusted client CA certificate chain to use with client authentication
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
Title: Overview of mutual authentication on Azure Application Gateway description: This article is an overview of mutual authentication on Application Gateway. -+ Last updated 03/30/2021 -+ # Overview of mutual authentication with Application Gateway
application-gateway Mutual Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-portal.md
Title: Configure mutual authentication on Azure Application Gateway through portal description: Learn how to configure an Application Gateway to have mutual authentication through portal -+ Last updated 02/18/2022-+ # Configure mutual authentication with Application Gateway through portal
application-gateway Mutual Authentication Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-powershell.md
Title: Configure mutual authentication on Azure Application Gateway through PowerShell description: Learn how to configure an Application Gateway to have mutual authentication through PowerShell -+ Last updated 02/18/2022-+
application-gateway Mutual Authentication Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-troubleshooting.md
Title: Troubleshoot mutual authentication on Azure Application Gateway description: Learn how to troubleshoot mutual authentication on Application Gateway -+ Last updated 02/18/2022-+ # Troubleshooting mutual authentication errors in Application Gateway
There is certificate data that is missing. The certificate uploaded could have b
#### Solution
-Validate that the certificate file uploaded does not have any missing data.
+Validate that the uploaded certificate file doesn't have any missing data.
### Error code: ApplicationGatewayTrustedClientCertificateMustNotHavePrivateKey
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The following table compares the features available with each SKU.
| Proxy NTLM authentication | &#x2713; | | > [!NOTE]
-> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its back-end pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).
+> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its backend pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).
## Differences from v1 SKU
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md
This type of routing is known as application layer (OSI layer 7) load balancing.
>[!NOTE] > Azure provides a suite of fully managed load-balancing solutions for your scenarios.
-> * If you are looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
+> * If you're looking to do DNS-based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
> * If you need to optimize global routing of your web traffic and optimize top-tier end-user performance and reliability through quick global failover, see [Front Door](../frontdoor/front-door-overview.md). > * To do transport layer load balancing, review [Load Balancer](../load-balancer/load-balancer-overview.md). >
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
A private endpoint is a network interface that uses a private IP address from th
1. Select **Create**. > [!Note]
-> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respected frontend IP configuration. Frontend IP configurations without an associated listener will not be shown as a _Target sub-resource_.
+> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively using the respective frontend IP configuration. Frontend IP configurations without an associated listener won't be shown as a _Target sub-resource_.
> [!Note]
-> If you are provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID, along with sub-resource to your frontend configuration. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the resource ID would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
+> If you're provisioning a **Private Endpoint** from within another tenant, you will need to use the Azure Application Gateway Resource ID, along with the name of the frontend configuration as the sub-resource. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the resource ID would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
# [Azure PowerShell](#tab/powershell)
application-gateway Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link.md
Four components are required to implement Private Link with Application Gateway:
- API version 2020-03-01 or later should be used to configure Private Link configurations. - Static IP allocation method in the Private Link Configuration object isn't supported. - The subnet used for PrivateLinkConfiguration cannot be the same as the Application Gateway subnet.-- Private link configuration for Application Gateway does not expose the "Alias" property and must be referenced via resource URI.-- Private Endpoint creation does not create a \*.privatelink DNS record/zone. All DNS records should be entered in existing zones used for your Application Gateway.
+- Private link configuration for Application Gateway doesn't expose the "Alias" property and must be referenced via resource URI.
+- Private Endpoint creation doesn't create a \*.privatelink DNS record/zone. All DNS records should be entered in existing zones used for your Application Gateway.
- Azure Front Door and Application Gateway do not support chaining via Private Link. - Source IP address and x-forwarded-for headers will contain the Private Link IP addresses.
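As a rough sketch of adding a Private Link configuration to an existing gateway frontend with the Azure CLI (all names and the address prefix are placeholder assumptions):

```azurecli
# Sketch: add a Private Link configuration to an existing frontend.
# All names and the address prefix are placeholders.
az network application-gateway private-link add \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name myPrivateLinkConfig \
  --frontend-ip PrivateFrontendIp \
  --subnet myPrivateLinkSubnet \
  --subnet-prefix 10.0.4.0/24
```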
application-gateway Proxy Buffers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/proxy-buffers.md
Title: Configure Request and Response Buffers description: Learn how to configure Request and Response buffers for your Azure Application Gateway. -+ Last updated 08/03/2022-+ #Customer intent: As a user, I want to know how can I disable/enable proxy buffers.
application-gateway Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-bicep.md
Title: 'Quickstart: Direct web traffic using Bicep'
description: In this quickstart, you learn how to use Bicep to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool. --++ Last updated 04/14/2022
In this quickstart, you use Bicep to create an Azure Application Gateway. Then y
## Review the Bicep file
-This Bicep file creates a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+This Bicep file creates a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/)
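Once the Bicep file is saved locally (here assumed to be main.bicep; the file and parameter names are assumptions for illustration), it can be deployed with the Azure CLI:

```azurecli
# Sketch: deploy the quickstart Bicep file to a new resource group.
# main.bicep and the parameter name are assumptions for illustration.
az group create --name myResourceGroupAG --location eastus

az deployment group create \
  --resource-group myResourceGroupAG \
  --template-file main.bicep \
  --parameters adminUsername=azureuser
```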
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
In this quickstart, you use Azure CLI to create an application gateway. Then you test it to make sure it works correctly.
-The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
:::image type="content" source="media/quick-create-portal/application-gateway-qs-resources.png" alt-text="application gateway resources":::
az network public-ip create \
## Create the backend servers
-A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install NGINX on the virtual machines to test the application gateway.
+A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install NGINX on the virtual machines to test the application gateway.
#### Create two virtual machines
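A minimal Azure CLI sketch of this step might look like the following; the resource and network names are placeholders, and cloud-init.txt is assumed to be a cloud-init file that installs NGINX.

```azurecli
# Sketch: create two backend VMs; names and network values are placeholders.
# cloud-init.txt is assumed to install NGINX on first boot.
for i in 1 2; do
  az vm create \
    --resource-group myResourceGroupAG \
    --name myVM$i \
    --image Ubuntu2204 \
    --vnet-name myVNet \
    --subnet myBackendSubnet \
    --admin-username azureuser \
    --generate-ssh-keys \
    --custom-data cloud-init.txt
done
```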
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
Title: 'Quickstart: Direct web traffic using the portal'
description: In this quickstart, you learn how to use the Azure portal to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool. --++ Last updated 10/13/2022
# Quickstart: Direct web traffic with Azure Application Gateway - Azure portal
-In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You will assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, a simple setup is used with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
+In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You will assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, a simple setup is used with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
![Quickstart setup](./media/quick-create-portal/application-gateway-qs-resources.png)
You'll create the application gateway using the tabs on the **Create application
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
In this quickstart, you use Azure PowerShell to create an application gateway. Then you test it to make sure it works correctly.
-The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
:::image type="content" source="media/quick-create-portal/application-gateway-qs-resources.png" alt-text="application gateway resources":::
New-AzApplicationGateway `
### Backend servers
-Now that you have created the Application Gateway, create the backend virtual machines which will host the websites. A backend can be composed of NICs, virtual machine scale sets, public IP address, internal IP address, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service.
+Now that you have created the Application Gateway, create the backend virtual machines which will host the websites. A backend can be composed of NICs, virtual machine scale sets, public IP address, internal IP address, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service.
In this example, you create two virtual machines to use as backend servers for the application gateway. You also install IIS on the virtual machines to verify that Azure successfully created the application gateway.
application-gateway Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Review the template
-For the sake of simplicity, this template creates a simple setup with a public front-end IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+For the sake of simplicity, this template creates a simple setup with a public frontend IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/)
application-gateway Redirect External Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Redirect Internal Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Rewrite Http Headers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-portal.md
Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account
## Configure header rewrite
-In this example, we'll modify a redirection URL by rewriting the location header in the HTTP response sent by a back-end application.
+In this example, we'll modify a redirection URL by rewriting the location header in the HTTP response sent by a backend application.
1. Select **All resources**, and then select your application gateway.
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
Application Gateway allows you to rewrite selected content of requests and respo
HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers.
-Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and back-end pools.
+Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend pools.
To learn how to rewrite request and response headers with Application Gateway using Azure portal, see [here](rewrite-url-portal.md).
To capture a substring for later use, put parentheses around the subpattern that
* (\d)+ # Match a digit one or more times, capturing the last into group 1 > [!Note]
-> Use of */* to prefix and suffix the pattern should not be specified in the pattern to match value. For example, (\d)(\d) will match two digits. /(\d)(\d)/ will not match two digits.
+> Don't add */* as a prefix and suffix in the pattern to match value. For example, (\d)(\d) will match two digits. /(\d)(\d)/ won't match two digits.
Once captured, you can reference them in the action set using the following format:
Once captured, you can reference them in the action set using the following form
* For a server variable, you must use {var_serverVariableName_groupNumber}. For example, {var_uri_path_1} or {var_uri_path_2} > [!Note]
-> The case of the condition variable needs to match case of the capture variable. For example, if my condition variable is User-Agent, my capture variable must be in the case of User-Agent (i.e. {http_req_User-Agent_2}). If my condition variable is defined as user-agent, my capture variable must be in the case of user-agent (i.e. {http_req_user-agent_2}).
+> The case of the condition variable must match the case of the capture variable. For example, if my condition variable is User-Agent, my capture variable must be for User-Agent (i.e. {http_req_User-Agent_2}). If my condition variable is defined as user-agent, my capture variable must be for user-agent (i.e. {http_req_user-agent_2}).
If you want to use the whole value, omit the group number. Simply use the format {http_req_headerName}, etc. without the groupNumber.
A rewrite rule set contains:
* Enabling 'Re-evaluate path map' isn't allowed for basic request routing rules. This is to prevent an infinite evaluation loop for a basic routing rule.
-* There needs to be at least 1 conditional rewrite rule or 1 rewrite rule which does not have 'Re-evaluate path map' enabled for path-based routing rules to prevent infinite evaluation loop for a path-based routing rule.
+* Path-based routing rules need at least one conditional rewrite rule, or one rewrite rule that doesn't have 'Re-evaluate path map' enabled, to prevent an infinite evaluation loop.
* Incoming requests are terminated with a 500 error code if a loop is created dynamically based on client inputs. The Application Gateway will continue to serve other requests without any degradation in such a scenario.
Here, with only header rewrite configured, the WAF evaluation will be done on `"
#### Remove port information from the X-Forwarded-For header
-Application Gateway inserts an X-Forwarded-For header into all requests before it forwards the requests to the backend. This header is a comma-separated list of IP ports. There might be scenarios in which the back-end servers only need the headers to contain IP addresses. You can use header rewrite to remove the port information from the X-Forwarded-For header. One way to do this is to set the header to the add_x_forwarded_for_proxy server variable. Alternatively, you can also use the variable client_ip:
+Application Gateway inserts an X-Forwarded-For header into all requests before it forwards the requests to the backend. This header is a comma-separated list of IP addresses and ports. There might be scenarios in which the backend servers only need the headers to contain IP addresses. You can use header rewrite to remove the port information from the X-Forwarded-For header. One way to do this is to set the header to the add_x_forwarded_for_proxy server variable. Alternatively, you can also use the variable client_ip:
![Remove port](./media/rewrite-http-headers-url/remove-port.png)
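A hedged Azure CLI equivalent of this configuration might look like the following; the gateway, rule set, and rule names are placeholder assumptions.

```azurecli
# Sketch: rewrite X-Forwarded-For to the port-free server variable.
# Gateway, rule set, and rule names are placeholders.
az network application-gateway rewrite-rule set create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name myRewriteSet

az network application-gateway rewrite-rule create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --rule-set-name myRewriteSet \
  --name removeXffPort \
  --sequence 100 \
  --request-headers X-Forwarded-For={var_add_x_forwarded_for_proxy}
```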
Modification of a redirect URL can be useful under certain circumstances. For e
> [!WARNING] > The need to modify a redirection URL sometimes comes up in the context of a configuration whereby Application Gateway is configured to override the hostname towards the backend. The hostname as seen by the backend is in that case different from the hostname as seen by the browser. In this situation, the redirect would not use the correct hostname. This configuration isn't recommended. >
-> The limitations and implications of such a configuration are described in [Preserve the original HTTP host name between a reverse proxy and its back-end web application](/azure/architecture/best-practices/host-name-preservation). The recommended setup for App Service is to follow the instructions for **"Custom Domain (recommended)"** in [Configure App Service with Application Gateway](configure-web-app.md). Rewriting the location header on the response as described in the below example should be considered a workaround and does not address the root cause.
+> The limitations and implications of such a configuration are described in [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation). The recommended setup for App Service is to follow the instructions for **"Custom Domain (recommended)"** in [Configure App Service with Application Gateway](configure-web-app.md). Rewriting the location header on the response as described in the below example should be considered a workaround and doesn't address the root cause.
When the app service sends a redirection response, it uses the same hostname in the location header of its response as the one in the request it receives from the application gateway. So the client will make the request directly to `contoso.azurewebsites.net/path2` instead of going through the application gateway (`contoso.com/path2`). Bypassing the application gateway isn't desirable.
You can fix several security vulnerabilities by implementing necessary headers i
### Delete unwanted headers
-You might want to remove headers that reveal sensitive information from an HTTP response. For example, you might want to remove information like the back-end server name, operating system, or library details. You can use the application gateway to remove these headers:
+You might want to remove headers that reveal sensitive information from an HTTP response. For example, you might want to remove information like the backend server name, operating system, or library details. You can use the application gateway to remove these headers:
![Deleting header](./media/rewrite-http-headers-url/remove-headers.png)
To accomplish scenarios where you want to choose the backend pool based on the v
:::image type="content" source="./media/rewrite-http-headers-url/url-scenario1-3.png" alt-text="URL rewrite scenario 1-3.":::
-Now, if the user requests *contoso.com/listing?category=any*, then it will be matched with the default path since none of the path patterns in the path map (/listing1, /listing2, /listing3) will match. Since you associated the above rewrite set with this path, this rewrite set will be evaluated. As the query string will not match the condition in any of the 3 rewrite rules in this rewrite set, no rewrite action will take place and therefore, the request will be routed unchanged to the backend associated with the default path (which is *GenericList*).
+Now, if the user requests *contoso.com/listing?category=any*, it will be matched with the default path since none of the path patterns in the path map (/listing1, /listing2, /listing3) will match. Since you associated the above rewrite set with this path, this rewrite set will be evaluated. Because the query string won't match the condition in any of the 3 rewrite rules in this rewrite set, no rewrite action will take place, and the request will be routed unchanged to the backend associated with the default path (which is *GenericList*).
If the user requests *contoso.com/listing?category=shoes*, the default path will again be matched. However, in this case the condition in the first rule will match, so the action associated with the condition will be executed, rewriting the URL path to /*listing1* and reevaluating the path map. When the path map is reevaluated, the request will now match the path associated with pattern */listing1*, and the request will be routed to the backend associated with this pattern, which is ShoesListBackendPool.
For a step-by-step guide to achieve the scenario described above, see [Rewrite U
### URL rewrite vs URL redirect
-In the case of a URL rewrite, Application Gateway rewrites the URL before the request is sent to the backend. This will not change what users see in the browser because the changes are hidden from the user.
+For a URL rewrite, Application Gateway rewrites the URL before the request is sent to the backend. This won't change what users see in the browser because the changes are hidden from the user.
-In the case of a URL redirect, Application Gateway sends a redirect response to the client with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect. The URL that the user sees in the browser will update to the new URL.
+For a URL redirect, Application Gateway sends a redirect response to the client with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect. The URL that the user sees in the browser will update to the new URL.
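To make the contrast concrete, here's a hedged sketch of creating a redirect configuration with the Azure CLI (the gateway, configuration, and listener names are placeholders); a rewrite, by comparison, is configured on a rewrite rule set and never surfaces to the client.

```azurecli
# Sketch: a permanent redirect that the client itself follows.
# Gateway, configuration, and listener names are placeholders.
az network application-gateway redirect-config create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name httpToHttpsRedirect \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true
```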
:::image type="content" source="./media/rewrite-http-headers-url/url-rewrite-vs-redirect.png" alt-text="Rewrite vs Redirect."::: ## Limitations -- If a response has more than one header with the same name, then rewriting the value of one of those headers will result in dropping the other headers in the response. This can usually happen with Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you are using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response will contain two Set-Cookie headers: one used by the app service, for example: `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, for example, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response.
+- If a response has more than one header with the same name, then rewriting the value of one of those headers will result in dropping the other headers in the response. This can usually happen with the Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you're using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case, the response will contain two Set-Cookie headers: one used by the app service, for example: `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, for example, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response.
- Rewrites aren't supported when the application gateway is configured to redirect the requests or to show a custom error page. - Request header names can contain alphanumeric characters and hyphens. Header names containing other characters will be discarded when a request is sent to the backend target. - Response header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27), with the exception of underscores (\_).
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
To configure TLS termination, a TLS/SSL certificate must be added to the listene
> [!NOTE]
-> Application gateway does not provide any capability to create a new certificate or send a certificate request to a certification authority.
+> Application gateway doesn't provide any capability to create a new certificate or send a certificate request to a certification authority.
For the TLS connection to work, you need to ensure that the TLS/SSL certificate meets the following conditions:
Application gateway supports the following types of certificates:
- CA (Certificate Authority) certificate: A CA certificate is a digital certificate issued by a certificate authority (CA) - EV (Extended Validation) certificate: An EV certificate is a certificate that conforms to industry standard certificate guidelines. This will turn the browser locator bar green and publish the company name as well.-- Wildcard Certificate: This certificate supports any number of subdomains based on *.site.com, where your subdomain would replace the *. It doesnΓÇÖt, however, support site.com, so in case the users are accessing your website without typing the leading "www", the wildcard certificate will not cover that.
+- Wildcard Certificate: This certificate supports any number of subdomains based on *.site.com, where your subdomain would replace the *. It doesn't, however, support site.com, so if users access your website without typing the leading "www", the wildcard certificate won't cover that.
- Self-Signed certificates: Client browsers don't trust these certificates and will warn the user that the virtual service's certificate isn't part of a trust chain. Self-signed certificates are good for testing or environments where administrators control the clients and can safely bypass the browser's security alerts. Production workloads should never use self-signed certificates. For more information, see [configure TLS termination with application gateway](./create-ssl-portal.md).
The [TLS policy](./application-gateway-ssl-policy-overview.md) applies only to t
Application Gateway only communicates with those backend servers that have either allow-listed their certificate with the Application Gateway or whose certificates are signed by well-known CAs and the certificate's CN matches the host name in the HTTP backend settings. These include the trusted Azure services such as Azure App Service/Web Apps and Azure API Management.
-If the certificates of the members in the backend pool aren't signed by well-known CA authorities, then each instance in the backend pool with end to end TLS enabled must be configured with a certificate to allow secure communication. Adding the certificate ensures that the application gateway only communicates with known back-end instances. This further secures the end-to-end communication.
+If the certificates of the members in the backend pool aren't signed by well-known CAs, then each instance in the backend pool with end-to-end TLS enabled must be configured with a certificate to allow secure communication. Adding the certificate ensures that the application gateway only communicates with known backend instances. This further secures the end-to-end communication.
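For the v2 SKU, allow-listing typically means uploading the root CA certificate that the backend certificates chain to; a minimal Azure CLI sketch, where the names and certificate path are assumptions:

```azurecli
# Sketch: trust backend certificates that chain to this root CA (v2 SKU).
# Names and the certificate path are placeholders.
az network application-gateway root-cert create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name backendRootCert \
  --cert-file ./backend-root.cer
```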
> [!NOTE] >
The following tables outline the differences in SNI between the v1 and v2 SKU in
| If the backend pool address is an IP address (v1) or if custom probe hostname is configured as IP address (v2) | SNI (server_name) won't be set. <br> **Note:** In this case, the backend server should be able to return a default/fallback certificate and this should be allow-listed in HTTP settings under authentication certificate. If there's no default/fallback certificate configured in the backend server and SNI is expected, the server might reset the connection and will lead to probe failures | In the order of precedence mentioned previously, if they have IP address as hostname, then SNI won't be set as per [RFC 6066](https://tools.ietf.org/html/rfc6066). <br> **Note:** SNI also won't be set in v2 probes if no custom probe is configured and no hostname is set on HTTP settings or backend pool | > [!NOTE]
-> If a custom probe isn't configured, then Application Gateway sends a default probe in this format - \<protocol\>://127.0.0.1:\<port\>/. For example, for a default HTTPS probe, it will be sent as https://127.0.0.1:443/. Note that, the 127.0.0.1 mentioned here is only used as HTTP host header and as per RFC 6066, will not be used as SNI header. For more information on health probe errors, check the [backend health troubleshooting guide](application-gateway-backend-health-troubleshooting.md).
+> If a custom probe isn't configured, then Application Gateway sends a default probe in this format - \<protocol\>://127.0.0.1:\<port\>/. For example, for a default HTTPS probe, it will be sent as https://127.0.0.1:443/. Note that the 127.0.0.1 mentioned here is only used as the HTTP host header and, per RFC 6066, won't be used as the SNI header. For more information on health probe errors, check the [backend health troubleshooting guide](application-gateway-backend-health-troubleshooting.md).
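To avoid the 127.0.0.1 default and control the host value the probe uses, you can configure a custom probe; a minimal sketch with the Azure CLI, where all names are placeholders:

```azurecli
# Sketch: a custom HTTPS probe with an explicit host name.
# Resource names and the host are placeholders.
az network application-gateway probe create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name backendHttpsProbe \
  --protocol Https \
  --host backend.contoso.com \
  --path / \
  --interval 30 \
  --timeout 30 \
  --threshold 3
```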
#### For live traffic
application-gateway Troubleshoot App Service Redirection App Service Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/troubleshoot-app-service-redirection-app-service-url.md
Title: Troubleshoot redirection to App Service URL
description: This article provides information on how to troubleshoot the redirection issue when Azure Application Gateway is used with Azure App Service -+ Last updated 04/15/2021-+ # Troubleshoot App Service issues in Application Gateway
-Learn how to diagnose and resolve issues you might encounter when Azure App Service is used as a back-end target with Azure Application Gateway.
+Learn how to diagnose and resolve issues you might encounter when Azure App Service is used as a backend target with Azure Application Gateway.
## Overview
application-gateway Tutorial Autoscale Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-autoscale-ps.md
New-AzWebApp -ResourceGroupName $rg -Name <site2-name> -Location $location -AppS
## Configure the infrastructure
-Configure the IP config, front-end IP config, back-end pool, HTTP settings, certificate, port, listener, and rule in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
+Configure the IP config, frontend IP config, backend pool, HTTP settings, certificate, port, listener, and rule in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
Replace your two web app FQDNs (for example: `mywebapp.azurewebsites.net`) in the $pool variable definition.
application-gateway Tutorial Http Header Rewrite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-http-header-rewrite-powershell.md
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "AppGwSubnet" -VirtualNetwork
## Configure the infrastructure
-Configure the IP config, front-end IP config, back-end pool, HTTP settings, certificate, port, and listener in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
+Configure the IP config, frontend IP config, backend pool, HTTP settings, certificate, port, and listener in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
```azurepowershell $ipconfig = New-AzApplicationGatewayIPConfiguration -Name "IPConfig" -Subnet $gwSubnet
$rule01 = New-AzApplicationGatewayRequestRoutingRule -Name "Rule1" -RuleType bas
Now you can specify the autoscale configuration for the application gateway. Two autoscaling configuration types are supported:
-* **Fixed capacity mode**. In this mode, the application gateway does not autoscale and operates at a fixed Scale Unit capacity.
+* **Fixed capacity mode**. In this mode, the application gateway doesn't autoscale and operates at a fixed Scale Unit capacity.
```azurepowershell $sku = New-AzApplicationGatewaySku -Name Standard_v2 -Tier Standard_v2 -Capacity 2
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
Title: 'Tutorial: Enable ingress controller add-on for existing AKS cluster with existing Azure application gateway' description: Use this tutorial to enable the Ingress Controller Add-On for your existing AKS cluster with an existing Application Gateway -+ Last updated 07/15/2022-+
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
Title: 'Tutorial: Enable the Ingress Controller add-on for a new AKS cluster with a new Azure application gateway' description: Use this tutorial to learn how to enable the Ingress Controller add-on for your new AKS cluster with a new application gateway instance. -+ Last updated 07/15/2022-+
application-gateway Tutorial Manage Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Tutorial Url Redirect Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Url Route Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/url-route-overview.md
# URL Path Based Routing overview
-URL Path Based Routing allows you to route traffic to back-end server pools based on URL Paths of the request.
+URL Path Based Routing allows you to route traffic to backend server pools based on URL paths of the request.
One of the scenarios is to route requests for different content types to different backend server pools.
-In the following example, Application Gateway is serving traffic for contoso.com from three back-end server pools for example: VideoServerPool, ImageServerPool, and DefaultServerPool.
+In the following example, Application Gateway is serving traffic for contoso.com from three backend server pools: VideoServerPool, ImageServerPool, and DefaultServerPool.
![imageURLroute](./media/application-gateway-url-route-overview/figure1.png)
Requests for http\://contoso.com/video/* are routed to VideoServerPool, and http
## UrlPathMap configuration element
-The urlPathMap element is used to specify Path patterns to back-end server pool mappings. The following code example is the snippet of urlPathMap element from template file.
+The urlPathMap element is used to specify path patterns to backend server pool mappings. The following code example is a snippet of the urlPathMap element from a template file.
```json "urlPathMaps": [{
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
Title: Azure regions and availability zones description: Learn about regions and availability zones and how they work to help you achieve true resiliency. -++ Last updated 08/23/2022
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
Title: Azure services that support availability zones description: Learn what services are supported by availability zones and understand resiliency across all Azure services. -++ Previously updated : 08/23/2022 Last updated : 10/20/2022
In the Product Catalog, always-available services are listed as "non-regional" s
| | | | [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure NetApp Files](../azure-netapp-files/use-availability-zones.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| Azure Red Hat OpenShift | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Storage: Ultra Disk | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
availability-zones Business Continuity Management Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/business-continuity-management-program.md
Title: Business continuity management program in Azure description: Learn about one of the most mature business continuity management programs in the industry. -++ Last updated 10/21/2021
availability-zones Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/cross-region-replication-azure.md
Title: Cross-region replication in Azure description: Learn about Cross-region replication in Azure. -++ Last updated 3/01/2022
availability-zones Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/glossary.md
Title: Azure resiliency terminology description: Understanding terms -++ Last updated 10/01/2021
availability-zones Region Types Service Categories Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/region-types-service-categories-azure.md
Title: Azure services description: Learn about Region types and service categories in Azure. -++ Last updated 12/10/2021
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Azure Arc-enabled Kubernetes follows the standard [semantic versioning scheme](h
While the schedule may vary, a new minor version of Azure Arc-enabled Kubernetes agents is released approximately once per month.
-The following command upgrades the agent to version 1.1.0:
+The following command upgrades the agent to version 1.8.14:
```azurecli
-az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.1.0
+az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.8.14
``` ## Check agent version
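One way to confirm the installed agent version after an upgrade, assuming the connected cluster resource exposes an agentVersion property:

```azurecli
az connectedk8s show \
  --resource-group AzureArcTest1 \
  --name AzureArcTest \
  --query agentVersion --output tsv
```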
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters."
-# Integrate Azure Active Directory with Azure Arc-enabled Kubernetes clusters
+# Use Azure RBAC for Azure Arc-enabled Kubernetes clusters
Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. By using this feature, you can use Azure Active Directory (Azure AD) and role assignments in Azure to control authorization checks on the cluster. This implies that you can now use Azure role assignments to granularly control who can read, write, and delete Kubernetes objects like deployment, pod, and service.
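As a hedged sketch of such a role assignment with the Azure CLI (the assignee is a placeholder and the built-in role name is an assumption):

```azurecli
# Sketch: grant read-only access to Kubernetes objects on the cluster.
# The assignee is a placeholder.
CLUSTER_ID=$(az connectedk8s show \
  --resource-group AzureArcTest1 \
  --name AzureArcTest \
  --query id --output tsv)

az role assignment create \
  --role "Azure Arc Kubernetes Viewer" \
  --assignee user@contoso.com \
  --scope $CLUSTER_ID
```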
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
## Prerequisites -- [Install or upgrade the Azure CLI](/cli/azure/install-azure-cli) to version 2.16.0 or later.
+- [Install or upgrade the Azure CLI](/cli/azure/install-azure-cli) to the latest version.
-- Install the `connectedk8s` Azure CLI extension, version 1.1.0 or later:
+- Install the latest version of `connectedk8s` Azure CLI extension:
```azurecli az extension add --name connectedk8s
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
- Connect an existing Azure Arc-enabled Kubernetes cluster: - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.1.0 or later.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
> [!NOTE] > You can't set up this feature for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc. This feature isn't supported on AKS on Azure Stack HCI.
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to version >= 2.16.0.
+- [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to the latest version.
-- Install the `connectedk8s` Azure CLI extension of version >= 1.2.5:
+- Install the latest version of the `connectedk8s` Azure CLI extension:
```azurecli az extension add --name connectedk8s
A conceptual overview of this feature is available in [Cluster connect - Azure A
- An existing Azure Arc-enabled Kubernetes connected cluster. - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
- Enable the following endpoints for outbound access, in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):
A conceptual overview of this feature is available in [Cluster connect - Azure A
- An existing Azure Arc-enabled Kubernetes connected cluster. - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
- Enable the following endpoints for outbound access, in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
In this article, you learn how to:
## Prerequisites -- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
+- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
-- Install the following Azure CLI extensions:
- - `connectedk8s` (version 1.2.0 or later)
- - `k8s-extension` (version 1.0.0 or later)
- - `customlocation` (version 0.1.3 or later)
+- Install the latest versions of the following Azure CLI extensions:
+ - `connectedk8s`
+ - `k8s-extension`
+ - `customlocation`
```azurecli az extension add --name connectedk8s
In this article, you learn how to:
Once registered, the `RegistrationState` state will have the `Registered` value. - Verify you have an existing [Azure Arc-enabled Kubernetes connected cluster](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.5.3 or later.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
## Enable custom locations on your cluster
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
A conceptual overview of this feature is available in [Cluster extensions - Azur
## Prerequisites
-* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
-* `connectedk8s` (version >= 1.2.0) and `k8s-extension` (version >= 1.0.0) Azure CLI extensions. Install the latest version of these Azure CLI extensions by running the following commands:
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
+* Install the latest version of the `connectedk8s` and `k8s-extension` Azure CLI extensions by running the following commands:
```azurecli
az extension add --name connectedk8s
```
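The block above is truncated here and shows only the first command; presumably the second extension is added the same way:

```azurecli
az extension add --name k8s-extension
```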
A conceptual overview of this feature is available in [Cluster extensions - Azur
* An existing Azure Arc-enabled Kubernetes connected cluster. * If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+ * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
## Currently available extensions
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
> * The identity must have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`). > * The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) can be used for this identity. This role is useful for at-scale onboarding, as it has only the granular permissions required to connect clusters to Azure Arc, and doesn't have permission to update, delete, or modify any other clusters or other Azure resources.
-* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
-* Install the **connectedk8s** Azure CLI extension of version >= 1.2.0:
+* Install the latest version of the **connectedk8s** Azure CLI extension:
```azurecli
az extension add --name connectedk8s
```
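Once the extension is installed, a cluster is typically onboarded with `az connectedk8s connect`; a minimal sketch in which the cluster and resource group names are placeholders:

```azurecli
# Connect the cluster in the current kubeconfig context to Azure Arc.
# "my-arc-cluster" and "my-resource-group" are hypothetical names.
az connectedk8s connect --name my-arc-cluster --resource-group my-resource-group
```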
If your cluster is behind an outbound proxy server, requests must be routed via
For outbound proxy servers where only a trusted certificate needs to be provided, without the proxy server endpoint inputs, `az connectedk8s connect` can be run with just the `--proxy-cert` input specified. If multiple trusted certificates are expected, the combined certificate chain can be provided in a single file using the `--proxy-cert` parameter.
+> [!NOTE]
+>
+> * `--custom-ca-cert` is an alias for `--proxy-cert`. Either parameter can be used; if both are passed in the same command, the one passed last is honored.
+ ### [Azure CLI](#tab/azure-cli) Run the connect command with the `--proxy-cert` parameter specified:
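+The command itself is omitted from this change summary; a minimal sketch with placeholder names and a placeholder certificate path:
+
+```azurecli
+# "my-arc-cluster", "my-resource-group", and the certificate path are placeholders.
+az connectedk8s connect --name my-arc-cluster --resource-group my-resource-group \
+    --proxy-cert /path/to/proxy-cert-chain.pem
+```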
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6 | | VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 | | Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) |
-| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.2.4](https://github.com/rancher/rke/releases/tag/v1.2.4); Kubernetes versions: [1.19.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.6)), [1.18.14](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.14)), [1.17.16](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.16)) |
-| Nutanix | [Karbon](https://www.nutanix.com/products/karbon) | Version 2.2.1 |
+| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 |
+| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
| Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 | | Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 |
-| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version 3.5.1 <br> MKE Version 3.4.7 |
+| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 | The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
The following firewall URL exceptions are needed for the Azure Arc resource brid
| Azure Arc Identity service | 443 | https://*.his.arc.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Manages identity and access control for Azure resources | | Azure Arc configuration service | 443 | https://*.dp.kubernetesconfiguration.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Used for Kubernetes cluster configuration. | | Cluster connect service | 443 | https://*.servicebus.windows.net | Appliance VM IP and control plane endpoint need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-| Guest Notification service | 443 | https://guestnotificationservice.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
+| Guest Notification service | 443 | `https://guestnotificationservice.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
| SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. | | Resource bridge (appliance) Dataplane service | 443 | https://*.dp.prod.appliances.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Communicate with resource provider in Azure. |
-| Resource bridge (appliance) container image download | 443 | *.blob.core.windows.net, https://ecpacr.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
+| Resource bridge (appliance) container image download | 443 | *.blob.core.windows.net, `https://ecpacr.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
| Resource bridge (appliance) image download | 80 | *.dl.delivery.mp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Download the Arc resource bridge OS images. |
-| Azure Arc for K8s container image download | 443 | https://azurearcfork8sdev.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
+| Azure Arc for K8s container image download | 443 | `https://azurearcfork8sdev.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
| ADHS telemetry service | 443 | adhs.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. | | Microsoft events data service | 443 | v20.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. | | vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used to by the vCenter server to communicate with the Appliance VM and the control plane.|
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
For a more complete example of using custom middleware in your function app, see
A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-Cancellation tokens are supported in .NET functions when running in an isolated process. The following example shows how to use a cancellation token in a function:
+Cancellation tokens are supported in .NET functions when running in an isolated process. The following example raises an exception when a cancellation request has been received:
+
+The following example performs clean-up actions if a cancellation request has been received:
+ ## ReadyToRun
Because your isolated process app runs outside the Functions runtime, you need t
[ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger [ILogger&lt;T&gt;]: /dotnet/api/microsoft.extensions.logging.ilogger-1 [GetLogger]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger?view=azure-dotnet&preserve-view=true
-[BlobClient]: /dotnet/api/azure.storage.blobs.blobclient?view=azure-dotnet
+[BlobClient]: /dotnet/api/azure.storage.blobs.blobclient?view=azure-dotnet&preserve-view=true
[DocumentClient]: /dotnet/api/microsoft.azure.documents.client.documentclient [BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage [HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
The Event Grid output binding is only available for Functions 2.x and higher. Ev
## Next steps
-* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-functions-eventgrid-extension/issues)
+* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-sdk-for-net/issues)
* [Event Grid trigger][trigger] * [Event Grid output binding][binding] * [Run a function when an Event Grid event is dispatched](./functions-bindings-event-grid-trigger.md)
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
The following example shows how to update a dataset, create a new tileset, and d
[tileset]: /rest/api/maps/v20220901preview/tileset [style-picker-control]: choose-map-style.md#add-the-style-picker-control [style-how-to]: how-to-create-custom-styles.md
-[map-config-api]: /rest/api/maps/v20220901preview/mapconfiguration
+[map-config-api]: /rest/api/maps/v20220901preview/map-configuration
[instantiate-indoor-manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager [style editor]: https://azure.github.io/Azure-Maps-Style-Editor
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Learn more about how to add more data to your map:
> [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps)
-[mapConfiguration]: /rest/api/maps/v20220901preview/mapconfiguration
+[mapConfiguration]: /rest/api/maps/v20220901preview/map-configuration
[tutorial]: tutorial-creator-indoor-maps.md [geos]: geographic-scope.md [visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor/
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
This article walks you through enabling Application Insights monitoring using th
Auto-instrumentation is easy to enable with no advanced configuration required.
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ > [!NOTE] > Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and virtual machine scale sets.
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
You can apply additional configurations, and then based on your specific scenari
You can turn on monitoring for your Java apps running in Azure App Service with just one click, no code change required. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md), which auto-collects telemetry.
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ 1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**. :::image type="content"source="./media/azure-web-apps/enable.png" alt-text="Screenshot of Application Insights tab with enable selected.":::
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
## Enable auto-instrumentation monitoring
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ # [Windows](#tab/Windows) > [!IMPORTANT]
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET based web applications running on [Azure App
## Enable auto-instrumentation monitoring
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ > [!NOTE] > The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#appinsights_javascript_enabled-and-urlcompression-isnt-supported).
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
Turning on application monitoring in Azure portal will automatically instrument
### Auto-instrumentation through Azure portal
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ You can turn on monitoring for your Node.js apps running in Azure App Service with just one click, no code change required. Application Insights for Node.js is integrated with Azure App Service on Linux - both code-based and custom containers - and with App Service on Windows for code-based apps. The integration is in public preview, and it adds the Node.js SDK, which is in GA.
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
There are two ways to enable monitoring for applications hosted on App Service:
This method is the easiest to enable, and no code change or advanced configurations are required. It's often referred to as "runtime" monitoring. For App Service, we recommend that at a minimum you enable this level of monitoring. Based on your specific scenario, you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+ For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ The following platforms are supported for auto-instrumentation monitoring: - [.NET Core](./azure-web-apps-net-core.md)
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Monitor your apps without code changes - auto-instrumentation for Azure Monitor Application Insights | Microsoft Docs description: Overview of auto-instrumentation for Azure Monitor Application Insights - codeless application performance management Previously updated : 08/31/2021 Last updated : 10/19/2022
-# What is auto-instrumentation for Azure Monitor application insights?
+# What is auto-instrumentation for Azure Monitor Application Insights?
-Auto-instrumentation allows you to enable application monitoring with Application Insights without changing your code.
-
-Application Insights is integrated with various resource providers and works on different environments. In essence, all you have to do is enable and - in some cases - configure the agent, which will collect the telemetry automatically. In no time, you'll see the metrics, requests, and dependencies in your Application Insights resource. This telemetry will allow you to spot the source of potential problems before they occur, and analyze the root cause with end-to-end transaction view.
-
-> [!NOTE]
-> Auto-instrumentation used to be known as "codeless attach" before October 2021.
+Auto-instrumentation collects [Application Insights](app-insights-overview.md) [telemetry](data-model.md).
+> [!div class="checklist"]
+> - No code changes required
+> - [SDK update](sdk-support-guidance.md) overhead is eliminated
+> - Recommended when available
## Supported environments, languages, and resource providers
-As we're adding new integrations, the auto-instrumentation capability matrix becomes complex. The table below shows you the current state of the matter as far as support for various resource providers, languages, and environments go.
-
-|Environment/Resource Provider | .NET | .NET Core | Java | Node.js | Python |
-||--|--|--|--|--|
-|Azure App Service on Windows - Publish as Code | GA, OnBD* | GA | GA | GA, OnBD* | Not supported |
-|Azure App Service on Windows - Publish as Docker | Public Preview | Public Preview | Public Preview | Not supported | Not supported |
-|Azure App Service on Linux | N/A | Public Preview | GA | GA | Not supported |
-|Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* |
-|Azure Functions - dependencies | Not supported | Not supported | Public Preview | Not supported | Through [extension](monitor-functions.md#distributed-tracing-for-python-function-apps) |
-|Azure Spring Cloud | Not supported | Not supported | GA | Not supported | Not supported |
-|Azure Kubernetes Service (AKS) | N/A | Not supported | Through agent | Not supported | Not supported |
-|Azure VMs Windows | Public Preview | Public Preview | Through agent | Not supported | Not supported |
-|On-Premises VMs Windows | GA, opt-in | Public Preview | Through agent | Not supported | Not supported |
-|Standalone agent - any env. | Not supported | Not supported | GA | Not supported | Not supported |
-
-*OnBD is short for On by Default - the Application Insights will be enabled automatically once you deploy your app in supported environments.
-
-## Azure App Service
-
-### Windows
-
-Application monitoring on Azure App Service on Windows is available for **[ASP.NET](./azure-web-apps-net.md)** (enabled by default), **[ASP.NET Core](./azure-web-apps-net-core.md)**, **[Java](./azure-web-apps-java.md)** (in public preview), and **[Node.js](./azure-web-apps-nodejs.md)** applications. To monitor a Python app, add the [SDK](./opencensus-python.md) to your code.
-
-> [!NOTE]
-> Application monitoring for apps on Windows Containers on App Service [is in public preview for .NET Core, .NET Framework, and Java](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html).
+The table below displays the current state of auto-instrumentation availability.
-### Linux
-You can enable monitoring for **[Java](./azure-web-apps-java.md?)**, **[Node.js](./azure-web-apps-nodejs.md?tabs=linux)**, and **[ASP.NET Core](./azure-web-apps-net-core.md?tabs=linux)(Preview)** apps running on Linux in App Service through the portal.
+Links are provided to additional information for each supported scenario.
-For [Python](./opencensus-python.md), use the SDK.
+|Environment/Resource Provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python |
+|---|---|---|---|---|---|
+|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: |
+|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: |
+|Azure App Service on Linux | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
+|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> |
+|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
+|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
+|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-## Azure Functions
+**Footnotes**
+- <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically.
+- <a name="Preview">2</a>: This feature is in public preview. [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+- <a name="Agent">3</a>: An agent must be deployed and configured.
-The basic monitoring for Azure Functions is enabled by default to collect log, performance, error data, and HTTP requests. For Java applications, you can enable richer monitoring with distributed tracing and get the end-to-end transaction details. This functionality for Java is in public preview for Windows and you can [enable it in Azure portal](./monitor-functions.md).
-
-## Azure Spring Cloud
-
-### Java
-Application monitoring for Java apps running in Azure Spring Cloud is integrated into the portal, you can enable Application Insights directly from the Azure portal, both for the existing and newly created Azure Spring Cloud resources.
-
-## Azure Kubernetes Service (AKS)
-
-Codeless instrumentation of Azure Kubernetes Service (AKS) is currently available for Java applications through the [standalone agent](./java-in-process-agent.md).
-
-## Azure Windows VMs and virtual machine scale set
-
-Auto-instrumentation for Azure VMs and virtual machine scale set is available for [.NET](./azure-vm-vmss-apps.md) and [Java](./java-in-process-agent.md) - this experience isn't integrated into the portal. The monitoring is enabled through a few steps with a stand-alone solution and doesn't require any code changes.
-
-## On-premises servers
-You can easily enable monitoring for your [on-premises Windows servers for .NET applications](./status-monitor-v2-overview.md) and for [Java apps](./java-in-process-agent.md).
-
-## Other environments
-The versatile Java standalone agent works on any environment, there's no need to instrument your code. [Follow the guide](./java-in-process-agent.md) to enable Application Insights and read about the amazing capabilities of the Java agent. The agent is in public preview and available on all regions.
+> [!NOTE]
+> Auto-instrumentation was known as "codeless attach" before October 2021.
## Next steps
-* [Application Insights Overview](./app-insights-overview.md)
-* [Application map](./app-map.md)
-* [End-to-end performance monitoring](../app/tutorial-performance.md)
+* [Application Insights Overview](app-insights-overview.md)
+* [Application Insights Overview dashboard](overview-dashboard.md)
+* [Application map](app-map.md)
azure-monitor Java In Process Agent Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent-redirect.md
Whether you are deploying on-premises or in the cloud, you can use Microsoft's O
For more information, see [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications).
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ ## Next steps - [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications)
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
## Application monitoring without instrumenting the code Currently, only Java lets you enable application monitoring without instrumenting the code. To monitor applications in other languages, use the SDKs.
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ ## Java Once enabled, the Java agent will automatically collect a multitude of requests, dependencies, logs, and metrics from the most widely used libraries and frameworks.
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live Metrics custom filters allow you to control which of your application's tel
- Recommended: Secure Live Metrics channel using [Azure AD authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication) - Legacy (no longer recommended): Set up an authenticated channel by configuring a secret API key as explained below
+> [!NOTE]
+> On 30 September 2025, API keys used to stream live metrics telemetry into Application Insights will be retired. After that date, applications that use API keys will no longer be able to send live metrics data to your Application Insights resource. Authenticated telemetry ingestion for live metrics streaming to Application Insights will need to use [Azure AD authentication for Application Insights](./azure-ad-authentication.md).
+ It's possible to try custom filters without having to set up an authenticated channel. Simply click on any of the filter icons and authorize the connected servers. Notice that if you choose this option, you'll have to authorize the connected servers once every new session or when a new server comes online. > [!WARNING]
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Application Insights collects log, performance, and error data, and automaticall
The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have this key, you can set it manually. For more information read more about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd).
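One way to set the key manually is through the function app's application settings; a minimal sketch in which the app and resource group names are placeholders:

```azurecli
# Add the Application Insights instrumentation key to the function app's settings.
# "my-function-app" and "my-resource-group" are hypothetical names.
az functionapp config appsettings set \
    --name my-function-app \
    --resource-group my-resource-group \
    --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<your-instrumentation-key>"
```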
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Distributed tracing for Java applications (public preview)
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Application Insights Agent (formerly named Status Monitor V2) is a PowerShell mo
It replaces Status Monitor. Telemetry is sent to the Azure portal, where you can [monitor](./app-insights-overview.md) your app.
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ > [!NOTE] > The module currently supports codeless instrumentation of ASP.NET and ASP.NET Core web apps hosted with IIS. Use an SDK to instrument Java and Node.js applications.
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 10/18/2022 Last updated : 10/20/2022 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Availability zone**
+ This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
+ * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
You can set permissions for a file or folder by using the **Security** tab of th
## Next steps
+* [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
* [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na Previously updated : 10/18/2022 Last updated : 10/20/2022 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Availability zone**
+ This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
+ * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
This article shows you how to create an NFS volume. For SMB volumes, see [Create
## Next steps
+* [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
* [Configure NFSv4.1 default domain for Azure NetApp Files](azure-netapp-files-configure-nfsv41-domain.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
na Previously updated : 01/05/2022 Last updated : 09/30/2022 # Configure policy-based backups for Azure NetApp Files
To enable a policy-based (scheduled) backup:
2. Select your Azure NetApp Files account. 3. Select **Backups**.
- ![Screenshot that shows how to navigate to Backups option.](../media/azure-netapp-files/backup-navigate.png)
+ :::image type="content" source="../media/azure-netapp-files/backup-navigate.png" alt-text="Screenshot that shows how to navigate to Backups option." lightbox="../media/azure-netapp-files/backup-navigate.png":::
4. Select **Backup Policies**. 5. Select **Add**. 6. In the **Backup Policy** page, specify the backup policy name. Enter the number of backups that you want to keep for daily, weekly, and monthly backups. Click **Save**.
- ![Screenshot that shows the Backup Policy window.](../media/azure-netapp-files/backup-policy-window-daily.png)
-
+ :::image type="content" source="../media/azure-netapp-files/backup-policy-window-daily.png" alt-text="Screenshot that shows the Backup Policy window." lightbox="../media/azure-netapp-files/backup-policy-window-daily.png":::
+
* If you configure and attach a backup policy to the volume without attaching a snapshot policy, the backup does not function properly. Only a baseline snapshot will be transferred to the Azure storage. * For each backup policy that you configure (for example, daily backups), ensure that you have a corresponding snapshot policy configuration (for example, daily snapshots). * Backup policy has a dependency on snapshot policy. If you haven't created a snapshot policy yet, you can configure both policies at the same time by selecting the **Create snapshot policy** checkbox on the Backup Policy window.
- ![Screenshot that shows the Backup Policy window with Snapshot Policy selected.](../media/azure-netapp-files/backup-policy-snapshot-policy-option.png)
+ :::image type="content" source="../media/azure-netapp-files/backup-policy-snapshot-policy-option.png" alt-text="Screenshot that shows the Backup Policy window with Snapshot Policy selected." lightbox="../media/azure-netapp-files/backup-policy-snapshot-policy-option.png":::
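+The steps above use the portal. A rough CLI sketch follows, assuming the `az netappfiles account backup-policy create` command and its retention parameters are available in your CLI version; every resource name below is a placeholder:
+
+```azurecli
+# Create a backup policy that keeps 2 daily, 1 weekly, and 1 monthly backups.
+# Account and policy names are hypothetical.
+az netappfiles account backup-policy create --resource-group my-resource-group \
+    --account-name my-anf-account --backup-policy-name my-backup-policy \
+    --location eastus --daily-backups 2 --weekly-backups 1 --monthly-backups 1 \
+    --enabled true
+```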
+ ### Example of a valid configuration
To enable the backup functionality for a volume:
The Vault information is pre-populated.
- ![Screenshot that shows Configure Backups window.](../media/azure-netapp-files/backup-configure-window.png)
+ :::image type="content" source="../media/azure-netapp-files/backup-configure-window.png" alt-text="Screenshot that shows Configure Backups window." lightbox="../media/azure-netapp-files/backup-configure-window.png":::
## Next steps
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 10/18/2022 Last updated : 10/20/2022 # Create a dual-protocol volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* Create a reverse lookup zone on the DNS server and then add a pointer (PTR) record of the AD host machine in that reverse lookup zone. Otherwise, the dual-protocol volume creation will fail. * The **Allow local NFS users with LDAP** option in Active Directory connections intends to provide occasional and temporary access to local users. When this option is enabled, user authentication and lookup from the LDAP server stop working, and the number of group memberships that Azure NetApp Files will support will be limited to 16. As such, you should keep this option *disabled* on Active Directory connections, except for the occasion when a local user needs to access LDAP-enabled volumes. In that case, you should disable this option as soon as local user access is no longer required for the volume. See [Allow local NFS users with LDAP to access a dual-protocol volume](#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume) about managing local user access. * Ensure that the NFS client is up to date and running the latest updates for the operating system.
-* Dual-protocol volumes support both Active Directory Domain Services (ADDS) and Azure Active Directory Domain Services (AADDS).
+* Dual-protocol volumes support both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (AADDS).
* Dual-protocol volumes do not support the use of LDAP over TLS with AADDS. See [LDAP over TLS considerations](configure-ldap-over-tls.md#considerations). * The NFS version used by a dual-protocol volume can be NFSv3 or NFSv4.1. The following considerations apply: * Dual protocol does not support the Windows ACLS extended attributes `set/get` from NFS clients.
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Availability zone**
+ This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
+ * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
## Next steps
+* [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
azure-netapp-files Faq Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-backup.md
Previously updated : 10/11/2021 Last updated : 09/10/2022 # Azure NetApp Files backup FAQs
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
+
+ Title: Manage availability zone volume placement for Azure NetApp Files | Microsoft Docs
+description: Describes how to create a volume with an availability zone by using Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 10/20/2022++
+# Manage availability zone volume placement for Azure NetApp Files
+
+Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice. To better understand availability zones, refer to [Using availability zones for high availability](use-availability-zones.md).
+
+## Requirements and considerations
+
+* The availability zone volume placement feature is supported only on newly created volumes. It is not currently supported on existing volumes.
+
+* This feature does not guarantee free capacity in the availability zone. For example, even if you can deploy a VM in availability zone 3 of the East US region, that doesn't guarantee free Azure NetApp Files capacity in that zone. If sufficient capacity isn't available, volume creation will fail.
+
+* After a volume is created with an availability zone, the specified availability zone can't be modified. Volumes can't be moved between availability zones.
+
+* NetApp accounts and capacity pools are not bound by the availability zone. A capacity pool can contain volumes in different availability zones.
+
+* This feature provides zonal volume placement, with latency within the zonal latency envelopes. It does not provide proximity placement towards compute. As such, it doesn't provide the lowest latency guarantee.
+
+* Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. This feature aligns with the generic logical-to-physical availability zone mapping for the subscription.
+
+* To create zone alignment between VMs and Azure NetApp Files, deploy the VMs and the Azure NetApp Files volumes separately, each within the same logical availability zone. The availability zone volume placement feature does not create zonal VMs upon volume creation, or vice versa.
+
+## Register the feature
+
+The availability zone volume placement feature is currently in preview. If you're using it for the first time, you need to register the feature first.
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAvailabilityZone
+ ```
+
+2. Check the status of the feature registration:
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAvailabilityZone
+ ```
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
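+For example, using the same provider namespace and feature name as the PowerShell commands above:
+
+```azurecli
+az feature register --namespace Microsoft.NetApp --name ANFAvailabilityZone
+az feature show --namespace Microsoft.NetApp --name ANFAvailabilityZone
+```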
+
+## Create a volume with an availability zone
+
+1. Select **Volumes** from your capacity pool. Then select **+ Add volume** to create a volume.
+
+ For details about volume creation, see:
+ * [Create an NFS volume](azure-netapp-files-create-volumes.md)
+ * [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
+ * [Create a dual-protocol volume](create-volumes-dual-protocol.md)
+
+2. In the **Create a Volume** page, under the **Basic** tab, select the **Availability Zone** pulldown to specify an availability zone where Azure NetApp Files resources are present.
+
+ > [!IMPORTANT]
+ > Logical availability zones for the subscription without Azure NetApp Files presence are marked `(Unavailable)` and are greyed out.
+
+ [ ![Screenshot that shows the Availability Zone menu.](../media/azure-netapp-files/availability-zone-menu-drop-down.png) ](../media/azure-netapp-files/availability-zone-menu-drop-down.png#lightbox)
+
+
+3. Follow the UI to create the volume. The **Review + Create** page shows the selected availability zone you specified.
+
+ [ ![Screenshot that shows the Availability Zone review.](../media/azure-netapp-files/availability-zone-display-down.png) ](../media/azure-netapp-files/availability-zone-display-down.png#lightbox)
+
+4. After you create the volume, the **Volume Overview** page includes availability zone information for the volume.
+
+ [ ![Screenshot that shows the Availability Zone volume overview.](../media/azure-netapp-files/availability-zone-volume-overview.png) ](../media/azure-netapp-files/availability-zone-volume-overview.png#lightbox)
+
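+The steps above use the portal. As a rough CLI equivalent, recent versions of `az netappfiles volume create` expose a `--zones` parameter; the sketch below assumes that parameter is available in your CLI version, and every resource name in it is a placeholder:
+
+```azurecli
+# Create an NFS volume pinned to logical availability zone 1 (assumes --zones support).
+# Account, pool, volume, VNet, and subnet names are hypothetical.
+az netappfiles volume create --resource-group my-resource-group \
+    --account-name my-anf-account --pool-name my-pool --name my-volume \
+    --location eastus --service-level Premium --usage-threshold 100 \
+    --file-path myvolume --vnet my-vnet --subnet my-anf-subnet \
+    --protocol-types NFSv3 --zones 1
+```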
+> [!IMPORTANT]
+> Once a volume is created using the availability zone volume placement feature, it has the same level of support as other volumes deployed in the subscription without this feature enabled. For example, an issue with backup and restore on the volume is supported, because the problem isn't with the availability zone volume placement feature itself.
+
+## Next steps
+
+* [Use availability zones for high availability](use-availability-zones.md)
+* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
+* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
+
+ Title: Use availability zones for high availability in Azure NetApp Files | Microsoft Docs
+description: Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 10/20/2022++
+# Use availability zones for high availability in Azure NetApp Files
+
+Azure [availability zones](../availability-zones/az-overview.md#availability-zones) are physically separate locations within each supporting Azure region that are tolerant to local failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
+
+>[!IMPORTANT]
+> Availability zones are referred to as _logical zones_. Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription, and the mapping will be different with different subscriptions. Azure subscriptions are automatically assigned this mapping when a subscription is created. Azure NetApp Files aligns with the generic logical-to-physical availability zone mapping for all Azure services for the subscription.
+
+Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Azure availability zones let you design and operate applications and databases that automatically transition between zones without interruption. You can design resilient solutions by using Azure services that use availability zones.
+
+The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/architecture/framework/resiliency/app-design#use-availability-zones-within-a-region). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of HA or failure domain (zone) isolation.
+
+Azure NetApp Files lets you deploy volumes in availability zones. The Azure NetApp Files [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in the logical availability zone of your choice, in alignment with Azure compute and other services in the same zone.
+
+Azure NetApp Files deployments will occur in the availability zone of choice if Azure NetApp Files is present in that availability zone and has sufficient capacity. All VMs within the region, in the same or peered VNets, can access all Azure NetApp Files resources.
+
+>[!IMPORTANT]
+>Azure NetApp Files availability zone volume placement provides zonal placement. It doesn't provide proximity placement towards compute. As such, it doesn't provide the lowest latency guarantee. VM-to-storage latencies are within the availability zone latency envelopes.
+
+You can co-locate your compute, storage, networking, and data resources across an availability zone, and replicate this arrangement in other availability zones. Many applications are built for HA across multiple availability zones using application-based replication and failover technologies, like [SQL Server Always-On Availability Groups (AOAG)](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server), [SAP HANA with HANA System Replication (HSR)](../virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md), and [Oracle with Data Guard](../virtual-machines/workloads/oracle/oracle-reference-architecture.md#high-availability-for-oracle-databases).
+
+Latency is subject to the availability zone latency envelope for access within an availability zone, and to the regional latency envelope for cross-availability-zone access.
+
+## Azure regions with availability zones
+
+For a list of regions that currently support availability zones, refer to [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+
+## Next steps
+
+* [Manage availability zone volume placement](manage-availability-zone-volume-placement.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 10/13/2022 Last updated : 10/20/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## October 2022
+* [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
+
+ Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well Architected Framework](/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
+ * [Application volume group for SAP HANA](application-volume-group-introduction.md) now generally available (GA) The application volume group for SAP HANA feature is now generally available. You no longer need to register the feature to use it.
azure-percept Concept Security Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/concept-security-configuration.md
Title: Azure Percept security recommendations description: Learn more about Azure Percept firewall configuration and security recommendations--++ Last updated 10/04/2022
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-percept-security.md
Title: Azure Percept security description: Learn more about Azure Percept security--++ Last updated 10/06/2022
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types are used to extend the capabilities of other resource types. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
* Reportconfigs * Reports * ScheduledActions
+* Settings
* Views ## Microsoft.CustomProviders
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.PolicyInsights * attestations
+* componentPolicyStates
* eventGridFilters * policyEvents * policyStates
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Resources
+* deploymentStacks
* links
+* snapshots
* tags ## Microsoft.Security
An extension resource is a resource that adds to another resource's capabilities
* automationRules * bookmarks * cases
+* contentPackages
+* contentTemplates
* dataConnectorDefinitions * dataConnectors * enrichment
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
* galleries * galleries/images * galleries/images/versions
+* galleries/serviceArtifacts
* images * snapshots * virtualMachines
Some resources have a limit on the number of instances per region. This limit is di
* dnszones/SRV * dnszones/TXT * expressRouteCrossConnections
+* loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): Microsoft.Resources/ARMDisableResourcesPerRGLimit (see the CLI sketch after this list)
* networkIntentPolicies * networkInterfaces * networkSecurityGroups
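As a minimal sketch of that registration step, using the standard Azure CLI feature commands (an already-selected subscription is assumed):

```azurecli-interactive
# Check the current registration state of the preview feature
az feature show --namespace Microsoft.Resources --name ARMDisableResourcesPerRGLimit --query properties.state

# Register the feature, then re-register the provider so the change propagates
az feature register --namespace Microsoft.Resources --name ARMDisableResourcesPerRGLimit
az provider register --namespace Microsoft.Resources
```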
Some resources have a limit on the number of instances per region. This limit is di
## Microsoft.Security * assignments
+* securityConnectors
## Microsoft.ServiceBus
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Tag support for Azure resources
To get the same data as a file of comma-separated values, download [tag-support.
> | farmBeats | Yes | Yes | > | farmBeats / eventGridFilters | No | No | > | farmBeats / extensions | No | No |
+> | farmBeats / solutions | No | No |
> | farmBeatsExtensionDefinitions | No | No |
+> | farmBeatsSolutionDefinitions | No | No |
## Microsoft.AlertsManagement
To get the same data as a file of comma-separated values, download [tag-support.
> | automationAccounts / privateEndpointConnections | No | No | > | automationAccounts / privateLinkResources | No | No | > | automationAccounts / runbooks | Yes | Yes |
+> | automationAccounts / runtimes | Yes | Yes |
> | automationAccounts / softwareUpdateConfigurationMachineRuns | No | No | > | automationAccounts / softwareUpdateConfigurationRuns | No | No | > | automationAccounts / softwareUpdateConfigurations | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | catalogs / products | No | No | > | catalogs / products / devicegroups | No | No |
-## Microsoft.AzureSphereGen2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | catalogs | Yes | Yes |
-> | catalogs / certificates | No | No |
-> | catalogs / deviceRegistrations | Yes | Yes |
-> | catalogs / provisioningPackages | Yes | Yes |
-
-## Microsoft.AzureSphereV2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | catalogs | Yes | Yes |
-> | catalogs / certificates | No | No |
-> | catalogs / deviceRegistrations | Yes | Yes |
-> | catalogs / provisioningPackages | Yes | Yes |
- ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | galleryImages | Yes | Yes | > | marketplaceGalleryImages | Yes | Yes | > | networkinterfaces | Yes | Yes |
+> | registeredSubscriptions | No | No |
> | storageContainers | Yes | Yes | > | virtualharddisks | Yes | Yes | > | virtualmachines | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | billingAccounts / enrollmentAccounts / billingRoleDefinitions | No | No | > | billingAccounts / enrollmentAccounts / billingSubscriptions | No | No | > | billingAccounts / invoices | No | No |
+> | billingAccounts / invoices / summary | No | No |
> | billingAccounts / invoices / transactions | No | No | > | billingAccounts / invoices / transactionSummary | No | No | > | billingAccounts / invoiceSections | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | billingAccounts / policies | No | No | > | billingAccounts / products | No | No | > | billingAccounts / promotionalCredits | No | No |
+> | billingAccounts / reservationOrders | No | No |
+> | billingAccounts / reservationOrders / reservations | No | No |
> | billingAccounts / reservations | No | No | > | billingAccounts / savingsPlanOrders | No | No | > | billingAccounts / savingsPlanOrders / savingsPlans | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | galleries / applications / versions | Yes | No | > | galleries / images | Yes | No | > | galleries / images / versions | Yes | No |
+> | galleries / serviceArtifacts | Yes | Yes |
> | hostGroups | Yes | Yes | > | hostGroups / hosts | Yes | Yes | > | images | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | Ledgers | Yes | Yes |
+> | ManagedCCF | Yes | Yes |
## Microsoft.Confluent
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | CacheNodes | Yes | Yes | > | enterpriseCustomers | Yes | Yes |
+> | ispCustomers | Yes | Yes |
+> | ispCustomers / ispCacheNodes | Yes | Yes |
## microsoft.connectedopenstack
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | Clusters | Yes | Yes |
-> | Datastores | Yes | Yes |
-> | Hosts | Yes | Yes |
-> | ResourcePools | Yes | Yes |
+> | clusters | Yes | Yes |
+> | datastores | Yes | Yes |
+> | hosts | Yes | Yes |
+> | resourcepools | Yes | Yes |
> | VCenters | Yes | Yes |
-> | VCenters / InventoryItems | No | No |
-> | VirtualMachines | Yes | Yes |
+> | vcenters / inventoryitems | No | No |
+> | virtualmachines | Yes | Yes |
> | VirtualMachines / AssessPatches | No | No |
-> | VirtualMachines / Extensions | Yes | Yes |
-> | VirtualMachines / GuestAgents | No | No |
-> | VirtualMachines / HybridIdentityMetadata | No | No |
+> | virtualmachines / extensions | Yes | Yes |
+> | virtualmachines / guestagents | No | No |
+> | virtualmachines / hybrididentitymetadata | No | No |
> | VirtualMachines / InstallPatches | No | No | > | VirtualMachines / UpgradeExtensions | No | No |
-> | VirtualMachineTemplates | Yes | Yes |
-> | VirtualNetworks | Yes | Yes |
+> | virtualmachinetemplates | Yes | Yes |
+> | virtualnetworks | Yes | Yes |
## Microsoft.Consumption
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | jobs | Yes | Yes |
+> | jobs / eventGridFilters | No | No |
## Microsoft.DataBoxEdge
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | ElasticPools | Yes | Yes |
-> | ElasticPools / IotHubTenants | Yes | Yes |
-> | ElasticPools / IotHubTenants / securitySettings | No | No |
> | IotHubs | Yes | Yes | > | IotHubs / eventGridFilters | No | No | > | IotHubs / failover | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | domains / topics | No | No | > | eventSubscriptions | No | No | > | extensionTopics | No | No |
+> | namespaces | Yes | Yes |
> | partnerConfigurations | Yes | Yes | > | partnerDestinations | Yes | Yes | > | partnerNamespaces | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | fluidRelayServers | Yes | Yes | > | fluidRelayServers / fluidRelayContainers | No | No |
+## Microsoft.Graph
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | AzureADApplication | Yes | Yes |
+> | AzureADApplicationPrototype | Yes | Yes |
+> | registeredSubscriptions | No | No |
+ ## Microsoft.GuestConfiguration > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | services / privateLinkResources | No | No | > | validateMedtechMappings | No | No | > | workspaces | Yes | Yes |
+> | workspaces / analyticsconnectors | Yes | Yes |
> | workspaces / dicomservices | Yes | Yes | > | workspaces / eventGridFilters | No | No | > | workspaces / fhirservices | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | instances | Yes | Yes | > | instances / chambers | Yes | Yes | > | instances / chambers / accessProfiles | Yes | Yes |
+> | instances / chambers / fileRequests | No | No |
+> | instances / chambers / files | No | No |
> | instances / chambers / workloads | Yes | Yes | > | instances / consortiums | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | provisionedClusters | Yes | Yes | > | provisionedClusters / agentPools | Yes | Yes | > | provisionedClusters / hybridIdentityMetadata | No | No |
+> | provisionedClusters / upgradeProfiles | No | No |
> | storageSpaces | Yes | Yes | > | virtualNetworks | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | No | > | networkfunctions | Yes | Yes |
-> | networkfunctions / components | No | No |
+> | networkFunctions / components | No | No |
> | networkFunctionVendors | No | No | > | publishers | Yes | Yes | > | publishers / artifactStores | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | actiongroups | Yes | Yes |
+> | actiongroups / networkSecurityPerimeterAssociationProxies | No | No |
+> | actiongroups / networkSecurityPerimeterConfigurations | No | No |
> | activityLogAlerts | Yes | Yes | > | alertrules | Yes | Yes | > | autoscalesettings | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | privateLinkScopes / scopedResources | No | No | > | rollbackToLegacyPricingModel | No | No | > | scheduledqueryrules | Yes | Yes |
+> | scheduledqueryrules / networkSecurityPerimeterAssociationProxies | No | No |
+> | scheduledqueryrules / networkSecurityPerimeterConfigurations | No | No |
> | topology | No | No | > | transactions | No | No | > | webtests | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | loadtests | Yes | Yes |
+> | loadtests / outboundNetworkDependenciesEndpoints | No | No |
+> | registeredSubscriptions | No | No |
## Microsoft.Logic
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | aisysteminventories | Yes | Yes | > | registries | Yes | Yes |
+> | registries / codes | No | No |
+> | registries / codes / versions | No | No |
+> | registries / components | No | No |
+> | registries / components / versions | No | No |
+> | registries / environments | No | No |
+> | registries / environments / versions | No | No |
+> | registries / models | No | No |
+> | registries / models / versions | No | No |
> | virtualclusters | Yes | Yes | > | workspaces | Yes | Yes | > | workspaces / batchEndpoints | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / models / versions | No | No | > | workspaces / onlineEndpoints | Yes | Yes | > | workspaces / onlineEndpoints / deployments | Yes | Yes |
-> | workspaces / registries | Yes | Yes |
> | workspaces / schedules | No | No | > | workspaces / services | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | mediaservices / accountFilters | No | No | > | mediaservices / assets | No | No | > | mediaservices / assets / assetFilters | No | No |
+> | mediaservices / assets / tracks | No | No |
> | mediaservices / contentKeyPolicies | No | No | > | mediaservices / eventGridFilters | No | No | > | mediaservices / graphInstances | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | cloudServicesNetworks | Yes | Yes | > | clusterManagers | Yes | Yes | > | clusters | Yes | Yes |
+> | clusters / admissions | No | No |
> | defaultCniNetworks | Yes | Yes | > | disks | Yes | Yes | > | hybridAksClusters | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | Pki | Yes | Yes |
-> | Pkis | Yes | Yes |
-> | Pkis / certificateAuthorities | Yes | Yes |
-> | Pkis / enrollmentPolicies | Yes | Yes |
+> | pkis | Yes | Yes |
+> | pkis / certificateAuthorities | No | No |
+> | pkis / enrollmentPolicies | Yes | Yes |
## Microsoft.PlayFab
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | attestations | No | No |
+> | componentPolicyStates | No | No |
> | eventGridFilters | No | No | > | policyEvents | No | No | > | policyMetadata | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | appliances | Yes | Yes |
+> | telemetryconfig | No | No |
## Microsoft.ResourceGraph
To get the same data as a file of comma-separated values, download [tag-support.
> | deploymentStacks / snapshots | No | No | > | links | No | No | > | resourceGroups | Yes | No |
+> | snapshots | No | No |
> | subscriptions | Yes | No | > | tags | No | No | > | templateSpecs | Yes | Yes | > | templateSpecs / versions | Yes | Yes | > | tenants | No | No |
+> | validateResources | No | No |
## Microsoft.SaaS
To get the same data as a file of comma-separated values, download [tag-support.
> | azureDevOpsConnectors / orgs | No | No | > | azureDevOpsConnectors / orgs / projects | No | No | > | azureDevOpsConnectors / orgs / projects / repos | No | No |
+> | azureDevOpsConnectors / repos | No | No |
+> | azureDevOpsConnectors / stats | No | No |
> | gitHubConnectors | Yes | Yes | > | gitHubConnectors / gitHubRepos | No | No | > | gitHubConnectors / owners | No | No | > | gitHubConnectors / owners / repos | No | No |
+> | gitHubConnectors / repos | No | No |
+> | gitHubConnectors / stats | No | No |
## Microsoft.SecurityInsights
To get the same data as a file of comma-separated values, download [tag-support.
> | automationRules | No | No | > | bookmarks | No | No | > | cases | No | No |
+> | contentPackages | No | No |
+> | contentTemplates | No | No |
> | dataConnectorDefinitions | No | No | > | dataConnectors | No | No | > | enrichment | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | clusters | Yes | Yes | > | clusters / applications | No | No |
+> | clusters / applications / services | No | No |
+> | clusters / applicationTypes | No | No |
+> | clusters / applicationTypes / versions | No | No |
> | edgeclusters | Yes | Yes | > | edgeclusters / applications | No | No | > | managedclusters | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | testBaseAccounts / externalTestTools | No | No | > | testBaseAccounts / externalTestTools / testCases | No | No | > | testBaseAccounts / featureUpdateSupportedOses | No | No |
+> | testBaseAccounts / firstPartyApps | No | No |
> | testBaseAccounts / flightingRings | No | No | > | testBaseAccounts / packages | Yes | Yes | > | testBaseAccounts / packages / favoriteProcesses | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | imageTemplates | Yes | Yes | > | imageTemplates / runOutputs | No | No |
+> | imageTemplates / triggers | No | No |
## microsoft.visualstudio
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> | farmBeats | Yes | > | farmBeats / eventGridFilters | No | > | farmBeats / extensions | No |
+> | farmBeats / solutions | No |
> | farmBeatsExtensionDefinitions | No |
+> | farmBeatsSolutionDefinitions | No |
## Microsoft.AlertsManagement
The resources are listed by resource provider namespace. To match a resource pro
> | automationAccounts / privateEndpointConnections | No | > | automationAccounts / privateLinkResources | No | > | automationAccounts / runbooks | Yes |
+> | automationAccounts / runtimes | Yes |
> | automationAccounts / softwareUpdateConfigurationMachineRuns | No | > | automationAccounts / softwareUpdateConfigurationRuns | No | > | automationAccounts / softwareUpdateConfigurations | No |
The resources are listed by resource provider namespace. To match a resource pro
> | catalogs / products | No | > | catalogs / products / devicegroups | No |
-## Microsoft.AzureSphereGen2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | catalogs | Yes |
-> | catalogs / certificates | No |
-> | catalogs / deviceRegistrations | Yes |
-> | catalogs / provisioningPackages | Yes |
-
-## Microsoft.AzureSphereV2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | catalogs | Yes |
-> | catalogs / certificates | No |
-> | catalogs / deviceRegistrations | Yes |
-> | catalogs / provisioningPackages | Yes |
- ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | galleryImages | Yes | > | marketplaceGalleryImages | Yes | > | networkinterfaces | Yes |
+> | registeredSubscriptions | No |
> | storageContainers | Yes | > | virtualharddisks | Yes | > | virtualmachines | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | billingAccounts / enrollmentAccounts / billingRoleDefinitions | No | > | billingAccounts / enrollmentAccounts / billingSubscriptions | No | > | billingAccounts / invoices | No |
+> | billingAccounts / invoices / summary | No |
> | billingAccounts / invoices / transactions | No | > | billingAccounts / invoices / transactionSummary | No | > | billingAccounts / invoiceSections | No |
The resources are listed by resource provider namespace. To match a resource pro
> | billingAccounts / policies | No | > | billingAccounts / products | No | > | billingAccounts / promotionalCredits | No |
+> | billingAccounts / reservationOrders | No |
+> | billingAccounts / reservationOrders / reservations | No |
> | billingAccounts / reservations | No | > | billingAccounts / savingsPlanOrders | No | > | billingAccounts / savingsPlanOrders / savingsPlans | No |
The resources are listed by resource provider namespace. To match a resource pro
> | galleries / applications / versions | Yes | > | galleries / images | Yes | > | galleries / images / versions | Yes |
+> | galleries / serviceArtifacts | Yes |
> | hostGroups | Yes | > | hostGroups / hosts | Yes | > | images | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | Ledgers | Yes |
+> | ManagedCCF | Yes |
## Microsoft.Confluent
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | CacheNodes | Yes | > | enterpriseCustomers | Yes |
+> | ispCustomers | Yes |
+> | ispCustomers / ispCacheNodes | Yes |
## microsoft.connectedopenstack
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | Clusters | Yes |
-> | Datastores | Yes |
-> | Hosts | Yes |
-> | ResourcePools | Yes |
+> | clusters | Yes |
+> | datastores | Yes |
+> | hosts | Yes |
+> | resourcepools | Yes |
> | VCenters | Yes |
-> | VCenters / InventoryItems | No |
-> | VirtualMachines | Yes |
+> | vcenters / inventoryitems | No |
+> | virtualmachines | Yes |
> | VirtualMachines / AssessPatches | No |
-> | VirtualMachines / Extensions | Yes |
-> | VirtualMachines / GuestAgents | No |
-> | VirtualMachines / HybridIdentityMetadata | No |
+> | virtualmachines / extensions | Yes |
+> | virtualmachines / guestagents | No |
+> | virtualmachines / hybrididentitymetadata | No |
> | VirtualMachines / InstallPatches | No | > | VirtualMachines / UpgradeExtensions | No |
-> | VirtualMachineTemplates | Yes |
-> | VirtualNetworks | Yes |
+> | virtualmachinetemplates | Yes |
+> | virtualnetworks | Yes |
## Microsoft.Consumption
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | jobs | Yes |
+> | jobs / eventGridFilters | No |
## Microsoft.DataBoxEdge
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | ElasticPools | Yes |
-> | ElasticPools / IotHubTenants | Yes |
-> | ElasticPools / IotHubTenants / securitySettings | No |
> | IotHubs | Yes | > | IotHubs / eventGridFilters | No | > | IotHubs / failover | No |
The resources are listed by resource provider namespace. To match a resource pro
> | domains / topics | No | > | eventSubscriptions | No | > | extensionTopics | No |
+> | namespaces | Yes |
> | partnerConfigurations | Yes | > | partnerDestinations | Yes | > | partnerNamespaces | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | fluidRelayServers | Yes | > | fluidRelayServers / fluidRelayContainers | No |
+## Microsoft.Graph
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | AzureADApplication | Yes |
+> | AzureADApplicationPrototype | Yes |
+> | registeredSubscriptions | No |
+ ## Microsoft.GuestConfiguration > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | services / privateLinkResources | No | > | validateMedtechMappings | No | > | workspaces | Yes |
+> | workspaces / analyticsconnectors | Yes |
> | workspaces / dicomservices | Yes | > | workspaces / eventGridFilters | No | > | workspaces / fhirservices | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | instances | Yes | > | instances / chambers | Yes | > | instances / chambers / accessProfiles | Yes |
+> | instances / chambers / fileRequests | No |
+> | instances / chambers / files | No |
> | instances / chambers / workloads | Yes | > | instances / consortiums | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | provisionedClusters | Yes | > | provisionedClusters / agentPools | Yes | > | provisionedClusters / hybridIdentityMetadata | No |
+> | provisionedClusters / upgradeProfiles | No |
> | storageSpaces | Yes | > | virtualNetworks | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | > | networkfunctions | Yes |
-> | networkfunctions / components | No |
+> | networkFunctions / components | No |
> | networkFunctionVendors | No | > | publishers | Yes | > | publishers / artifactStores | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | actiongroups | Yes |
+> | actiongroups / networkSecurityPerimeterAssociationProxies | No |
+> | actiongroups / networkSecurityPerimeterConfigurations | No |
> | activityLogAlerts | Yes | > | alertrules | Yes | > | autoscalesettings | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | privateLinkScopes / scopedResources | No | > | rollbackToLegacyPricingModel | No | > | scheduledqueryrules | Yes |
+> | scheduledqueryrules / networkSecurityPerimeterAssociationProxies | No |
+> | scheduledqueryrules / networkSecurityPerimeterConfigurations | No |
> | topology | No | > | transactions | No | > | webtests | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | loadtests | Yes |
+> | loadtests / outboundNetworkDependenciesEndpoints | No |
+> | registeredSubscriptions | No |
## Microsoft.Logic
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | aisysteminventories | Yes | > | registries | Yes |
+> | registries / codes | No |
+> | registries / codes / versions | No |
+> | registries / components | No |
+> | registries / components / versions | No |
+> | registries / environments | No |
+> | registries / environments / versions | No |
+> | registries / models | No |
+> | registries / models / versions | No |
> | virtualclusters | Yes | > | workspaces | Yes | > | workspaces / batchEndpoints | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | workspaces / models / versions | No | > | workspaces / onlineEndpoints | Yes | > | workspaces / onlineEndpoints / deployments | Yes |
-> | workspaces / registries | Yes |
> | workspaces / schedules | No | > | workspaces / services | No |
The resources are listed by resource provider namespace. To match a resource pro
> | mediaservices / accountFilters | No | > | mediaservices / assets | No | > | mediaservices / assets / assetFilters | No |
+> | mediaservices / assets / tracks | No |
> | mediaservices / contentKeyPolicies | No | > | mediaservices / eventGridFilters | No | > | mediaservices / graphInstances | No |
The resources are listed by resource provider namespace. To match a resource pro
> | cloudServicesNetworks | Yes | > | clusterManagers | Yes | > | clusters | Yes |
+> | clusters / admissions | No |
> | defaultCniNetworks | Yes | > | disks | Yes | > | hybridAksClusters | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | Pki | Yes |
-> | Pkis | Yes |
-> | Pkis / certificateAuthorities | Yes |
-> | Pkis / enrollmentPolicies | Yes |
+> | pkis | Yes |
+> | pkis / certificateAuthorities | No |
+> | pkis / enrollmentPolicies | Yes |
## Microsoft.PlayFab
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | attestations | No |
+> | componentPolicyStates | No |
> | eventGridFilters | No | > | policyEvents | No | > | policyMetadata | No |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | appliances | Yes |
+> | telemetryconfig | No |
## Microsoft.ResourceGraph
The resources are listed by resource provider namespace. To match a resource pro
> | deploymentStacks / snapshots | No | > | links | No | > | resourceGroups | No |
+> | snapshots | No |
> | subscriptions | No | > | tags | No | > | templateSpecs | Yes | > | templateSpecs / versions | Yes | > | tenants | No |
+> | validateResources | No |
## Microsoft.SaaS
The resources are listed by resource provider namespace. To match a resource pro
> | azureDevOpsConnectors / orgs | No | > | azureDevOpsConnectors / orgs / projects | No | > | azureDevOpsConnectors / orgs / projects / repos | No |
+> | azureDevOpsConnectors / repos | No |
+> | azureDevOpsConnectors / stats | No |
> | gitHubConnectors | Yes | > | gitHubConnectors / gitHubRepos | No | > | gitHubConnectors / owners | No | > | gitHubConnectors / owners / repos | No |
+> | gitHubConnectors / repos | No |
+> | gitHubConnectors / stats | No |
## Microsoft.SecurityInsights
The resources are listed by resource provider namespace. To match a resource pro
> | automationRules | No | > | bookmarks | No | > | cases | No |
+> | contentPackages | No |
+> | contentTemplates | No |
> | dataConnectorDefinitions | No | > | dataConnectors | No | > | enrichment | No |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | clusters | Yes | > | clusters / applications | No |
+> | clusters / applications / services | No |
+> | clusters / applicationTypes | No |
+> | clusters / applicationTypes / versions | No |
> | edgeclusters | Yes | > | edgeclusters / applications | No | > | managedclusters | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | testBaseAccounts / externalTestTools | No | > | testBaseAccounts / externalTestTools / testCases | No | > | testBaseAccounts / featureUpdateSupportedOses | No |
+> | testBaseAccounts / firstPartyApps | No |
> | testBaseAccounts / flightingRings | No | > | testBaseAccounts / packages | Yes | > | testBaseAccounts / packages / favoriteProcesses | No |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | imageTemplates | Yes | > | imageTemplates / runOutputs | No |
+> | imageTemplates / triggers | No |
## microsoft.visualstudio
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
When creating a new paid account, you need to connect the Azure Video Indexer ac
## Limited access features
-This section talks about limited access features in Azure Video Indexer.
-
-|When did I create the account?|Trial account (free)| Paid account <br/>(classic or ARM-based)|
-||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
-|New VI accounts <br/><br/>created after June 21, 2022 |Not able the access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables the access face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).|
-
-\*In Brazil South we also disabled the face detection.
For more information, see [Azure Video Indexer limited access features](limited-access-features.md).
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
Azure Video Indexer supports embedding widgets in your apps. For more informatio
## Next steps - [overview](video-indexer-overview.md)-- [Insights](video-indexer-output-json-v2.md)
+- Once you [set up](video-indexer-get-started.md) your account, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
+
+ Title: Azure Video Indexer insights overview
+description: This article gives a brief overview of Azure Video Indexer insights.
+ Last updated : 10/19/2022+++
+# Azure Video Indexer insights
+
+Insights contain an aggregated view of the data: faces, topics, and emotions. Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. For more information about available models, see the [overview](video-indexer-overview.md).
++
+The [Azure Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project. Once created, the project can be rendered and downloaded from Azure Video Indexer and be used in your own editing applications or downstream workflows. For more information, see [Use editor to create projects](use-editor-create-project.md).
+
+Once you are [set up](video-indexer-get-started.md) with Azure Video Indexer, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
The Azure Video Indexer service is made available to customers and partners unde
## Limited access features
-This section talks about limited access features in Azure Video Indexer.
-
-|When did I create the account?|Trial Account (Free)| Paid Account <br/>(classic or ARM-based)|
-||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
-|New VI accounts <br/><br/>created after June 21, 2022 |Not able the access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables the access face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).|
-
-\*In Brazil South we also disabled the face detection.
## Help and support
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
See the [input container/file formats](/azure/media-services/latest/encode-media
After you upload and index a video, you can continue using [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
-For more details, see [Upload and index videos](upload-index-videos.md).
+## Start using insights
-To start using the APIs, see [use APIs](video-indexer-use-apis.md)
+For more details, see [Upload and index videos](upload-index-videos.md) and check out other **How to guides**.
## Next steps
-* For the API integration, see [Use Azure Video Indexer REST API](video-indexer-use-apis.md).
* To embed widgets, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-* Also, check out our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
+* For the API integration, see [Use Azure Video Indexer REST API](video-indexer-use-apis.md).
+* Check out our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
At the end of the workshop, you'll have a good understanding of the kind of information that can be extracted from video and audio content, and you'll be more prepared to identify opportunities related to content intelligence, pitch video AI on Azure, and demo several scenarios on Azure Video Indexer.
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
When indexing by one channel, partial results for those models will be available.
Learn how to [get started with Azure Video Indexer](video-indexer-get-started.md).
+Once you're set up, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
+ ## Next steps You're ready to get started with Azure Video Indexer. For more information, see the following articles:
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 09/07/2022 Last updated : 10/20/2022
Error code: UserErrorRequestDisallowedByPolicy <BR> Error message: An invalid p
If you have an Azure Policy that [governs tags within your environment](../governance/policy/tutorials/govern-tags.md), either consider changing the policy from a [Deny effect](../governance/policy/concepts/effects.md#deny) to a [Modify effect](../governance/policy/concepts/effects.md#modify), or create the resource group manually according to the [naming schema required by Azure Backup](./backup-during-vm-creation.md#azure-backup-resource-group-for-virtual-machines).
+### UserErrorUnableToOpenMount
+
+**Error code**: UserErrorUnableToOpenMount
+
+**Cause**: Backups failed because the backup extension on the VM was unable to open the mount points in the VM.
+
+**Recommended action**: The backup extension on the VM must be able to access all mount points in the VM to determine the underlying disks, take a snapshot, and calculate the size. Ensure that all mount points are accessible.
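One quick way to verify accessibility on a Linux VM is the hypothetical check below, which walks every mount point reported by `findmnt` and flags any that can't be read; adapt it to your distribution.

```bash
# Minimal sketch: flag mount points the backup extension would be unable to read.
# Assumes a Linux VM with util-linux (findmnt) available.
findmnt -rn -o TARGET | while read -r mountpoint; do
  if ! ls "$mountpoint" >/dev/null 2>&1; then
    echo "Inaccessible mount point: $mountpoint"
  fi
done
```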
+ ## Jobs | Error details | Workaround |
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
By default, the results are stored in a container managed by Microsoft. When the
::: zone pivot="rest-api"
-The [GetTranscriptionsFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
+The [GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
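As a concrete illustration, the hedged sketch below shows what such a request could look like, assuming the standard Speech-to-text v3.0 route; in practice, use the exact "files" URI returned in your previous response.

```azurecli-interactive
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/YourTranscriptionId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
```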
Depending in part on the request parameters set when you created the transcripti
- [Batch transcription overview](batch-transcription.md) - [Locate audio files for batch transcription](batch-transcription-audio-data.md)-- [Create a batch transcription](batch-transcription-create.md)
+- [Create a batch transcription](batch-transcription-create.md)
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
## Speech Devices SDK 1.11.0: - Support for arbitrary microphone array geometries and setting the working angle through a [configuration file](https://aka.ms/sdsdk-micarray-json).-- Support for [Urbetter DDK](http://www.urbetter.com/products_56/278.html).
+- Support for [Urbetter DDK](https://urbetters.com/collections).
- Released binaries for the [GGEC Speaker](https://aka.ms/sdsdk-download-speaker) used in our [Voice Assistant sample](https://aka.ms/sdsdk-speaker). - Released binaries for [Linux ARM32](https://aka.ms/sdsdk-download-linux-arm32) and [Linux ARM 64](https://aka.ms/sdsdk-download-linux-arm64) for Raspberry Pi and similar devices. - Updated the [Speech SDK](./speech-sdk.md) component to version 1.11.0. For more information, see its [release notes](./releasenotes.md).
cognitive-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-continuous-integration-continuous-deployment.md
Title: CI/CD for Custom Speech - Speech service
description: Apply DevOps with Custom Speech and CI/CD workflows. Implement an existing DevOps solution for your own project. -+ Last updated 05/08/2022-+ # CI/CD for Custom Speech
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Training with plain text or structured text usually finishes within a few minute
> > Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
-If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) REST API.
+If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) REST API.
## Consider datasets by scenario
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Copying a model directly to a project in another region is not supported with th
::: zone pivot="rest-api"
-To copy a model to another Speech resource, use the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To copy a model to another Speech resource, use the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
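For illustration, a minimal sketch of the copy request, assuming the v3.0 `copyto` route; `YourModelId` and `DestinationSpeechResourceKey` are placeholder stand-ins, not values from this article:

```azurecli-interactive
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
  "targetSubscriptionKey": "DestinationSpeechResourceKey"
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId/copyto"
```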
To connect a new model to a project of the Speech resource where the model was c
- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
-Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
```azurecli-interactive curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
You should create Speech Service resources in both a main and a secondary region
Custom Speech Service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails. 1. Create your custom model in one main region (Primary).
-2. Run the [CopyModelToSubscriptionToSubscription](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) operation to replicate the custom model to all prepared regions (Secondary).
+2. Run the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation to replicate the custom model to all prepared regions (Secondary).
3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md). - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md). 4. Configure your client to fail over on persistent errors as with the default endpoints usage.
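To make step 4 concrete, here's a minimal failover sketch in shell form. The regions, endpoint IDs, single key variable (each region has its own resource key), and the `cid` query parameter on the short-audio REST endpoint are all assumptions to adapt, not a definitive implementation:

```bash
# Hypothetical failover loop: try the primary region first, fall back to the secondary.
# Regions, endpoint IDs, and the cid parameter are placeholder assumptions.
for region_endpoint in "westus2:PrimaryEndpointId" "eastus:SecondaryEndpointId"; do
  region="${region_endpoint%%:*}"
  endpoint_id="${region_endpoint##*:}"
  status=$(curl -s -o result.json -w "%{http_code}" \
    -X POST "https://${region}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=${endpoint_id}&language=en-US" \
    -H "Ocp-Apim-Subscription-Key: ${SPEECH_KEY}" \
    -H "Content-Type: audio/wav" \
    --data-binary @sample.wav)
  if [ "$status" = "200" ]; then
    break  # primary (or fallback) succeeded; stop failing over
  fi
done
```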
Check the [public voices available](language-support.md?tabs=stt-tts). You can a
Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically fail over operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used if there's an outage.
-During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
+During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
The following limits are observed for the conversational language understanding.
|Item|Lower Limit| Upper Limit | | | | |
-|Count of utterances per project | 1 | 15,000|
+|Count of utterances per project | 1 | 25,000|
|Utterance length in characters | 1 | 500 | |Count of intents per project | 1 | 500| |Count of entities per project | 1 | 500|
cognitive-services Entity Resolutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-resolutions.md
A resolution is a standard format for an entity. Entities can be expressed in va
You can use NER resolutions to implement actions or retrieve further information. For example, your service can extract datetime entities to extract dates and times that will be provided to a meeting scheduling system.
+> [!NOTE]
+> Entity resolution responses are only supported starting from **_api-version=2022-10-01-preview_** and **_"modelVersion": "2022-10-01-preview"_**.
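For example, a hedged request sketch against the Language REST API using that preview version; the resource name, key, and sample text are placeholders:

```azurecli-interactive
curl -X POST "https://YourLanguageResource.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-10-01-preview" \
  -H "Ocp-Apim-Subscription-Key: YourLanguageResourceKey" \
  -H "Content-Type: application/json" \
  -d '{
  "kind": "EntityRecognition",
  "analysisInput": { "documents": [ { "id": "1", "language": "en", "text": "The meeting lasted 45 minutes." } ] },
  "parameters": { "modelVersion": "2022-10-01-preview" }
}'
```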
+ This article documents the resolution objects returned for each entity category or subcategory. ## Age
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Use this article to learn which natural languages are supported by the NER featu
> [!NOTE] > * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released.
-> * Only "Person", "Location" and "Organization" entities are returned for languages marked with *.
-> * The current model version for NER is `2021-06-01`.
+> * The language support below is for model version `2022-10-01-preview`.
## NER language support
-| Language | Language code | Starting with model version: | Supports entity resolution | Notes |
-|:-|:-:|:-:|:--:|::|
-| Arabic* | `ar` | 2019-10-01 | | |
-| Chinese-Simplified | `zh-hans` | 2021-01-15 | ✓ | `zh` also accepted |
-| Chinese-Traditional* | `zh-hant` | 2019-10-01 | | |
-| Czech* | `cs` | 2019-10-01 | | |
-| Danish* | `da` | 2019-10-01 | | |
-| Dutch* | `nl` | 2019-10-01 | ✓ | |
-| English | `en` | 2019-10-01 | ✓ | |
-| Finnish* | `fi` | 2019-10-01 | | |
-| French | `fr` | 2021-01-15 | ✓ | |
-| German | `de` | 2021-01-15 | ✓ | |
-| Hebrew | `he` | 2022-10-01 | | |
-| Hindi | `hi` | 2022-10-01 | ✓ | |
-| Hungarian* | `hu` | 2019-10-01 | | |
-| Italian | `it` | 2021-01-15 | ✓ | |
-| Japanese | `ja` | 2021-01-15 | ✓ | |
-| Korean | `ko` | 2021-01-15 | | |
-| Norwegian (Bokmål)* | `no` | 2019-10-01 | | `nb` also accepted |
-| Polish* | `pl` | 2019-10-01 | | |
-| Portuguese (Brazil) | `pt-BR` | 2021-01-15 | ✓ | |
-| Portuguese (Portugal) | `pt-PT` | 2021-01-15 | | `pt` also accepted |
-| Russian* | `ru` | 2019-10-01 | | |
-| Spanish | `es` | 2020-04-01 | ✓ | |
-| Swedish* | `sv` | 2019-10-01 | | |
-| Turkish* | `tr` | 2019-10-01 | ✓ | |
+|Language |Language code|Supports resolution|Notes |
+|--|--|--|--|
+|Arabic |`ar` | | |
+|Chinese-Simplified |`zh-hans` |✓ |`zh` also accepted|
+|Chinese-Traditional |`zh-hant` | | |
+|Czech |`cs` | | |
+|Danish |`da` | | |
+|Dutch |`nl` |✓ | |
+|English |`en` |✓ | |
+|Finnish |`fi` | | |
+|French |`fr` |✓ | |
+|German |`de` |✓ | |
+|Hebrew |`he` | | |
+|Hindi |`hi` |✓ | |
+|Hungarian |`hu` | | |
+|Italian |`it` |✓ | |
+|Japanese |`ja` |✓ | |
+|Korean |`ko` | | |
+|Norwegian (Bokmål) |`no` | |`nb` also accepted|
+|Polish |`pl` | | |
+|Portuguese (Brazil) |`pt-BR` |✓ | |
+|Portuguese (Portugal)|`pt-PT` | |`pt` also accepted|
+|Russian |`ru` | | |
+|Spanish |`es` |✓ | |
+|Swedish |`sv` | | |
+|Turkish |`tr` |✓ | |
+ ## Next steps
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
The **SMS** tab displays the operations and results for SMS usage through an Azu
:::image type="content" source="media\workbooks\sms.png" alt-text="SMS tab"::: The **Email** tab displays delivery status, email size, and email count:
-[Screenshot displays email count, size and email delivery status level that illustrate email insights]
## Editing dashboards
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
Communication Services offers the following types of logs that you can enable:
| SdkType | The SDK type used in the request. | | PlatformType | The platform type used in the request. | | Method | The method used in the request. |
+|NumberType| The type of number the SMS message is sent from. It can be either **LongCodeNumber** or **ShortCodeNumber**. |
### Authentication operational logs
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Title: Service limits for Azure Communication Services description: Learn how to-+ -+ Last updated 11/01/2021
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Title: Azure Communication Services Call Recording overview description: Provides an overview of the Call Recording feature and APIs.-+ -+ Last updated 06/30/2021
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/quick-create-identity.md
Title: Quickstart - Quickly create Azure Communication Services identities for testing description: Learn how to use the Identities & Access Tokens tool in the Azure portal to use with samples and for troubleshooting.-+ -+ Last updated 07/19/2021
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
Title: Azure Communication Services Call Recording API quickstart
description: Provides a quickstart sample for the Call Recording APIs. -+ -+ Last updated 06/30/2021
communication-services Download Recording File Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/download-recording-file-sample.md
- Title: Record and download calls with Event Grid - An Azure Communication Services quickstart-
-description: In this quickstart, you'll learn how to record and download calls using Event Grid.
---- Previously updated : 06/30/2021------
-# Record and download calls with Event Grid
--
-Get started with Azure Communication Services by recording your Communication Services calls using Azure Event Grid.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource. [Create a Communication Services resource](../create-communication-resource.md?pivots=platform-azp&tabs=windows).-- The [`Microsoft.Azure.EventGrid`](https://www.nuget.org/packages/Microsoft.Azure.EventGrid/) NuGet package.-
-## Create a webhook and subscribe to the recording events
-We'll use *webhooks* and *events* to facilitate call recording and media file downloads.
-
-First, we'll create a webhook. Your Communication Services resource will use Event Grid to notify this webhook when the `recording` event is triggered, and then again when recorded media is ready to be downloaded.
-
-You can write your own custom webhook to receive these event notifications. It's important for this webhook to respond to inbound messages with the validation code to successfully subscribe the webhook to the event service.
-
-```csharp
-[HttpPost]
-public async Task<ActionResult> PostAsync([FromBody] object request)
- {
- //Deserializing the request
- var eventGridEvent = JsonConvert.DeserializeObject<EventGridEvent[]>(request.ToString())
- .FirstOrDefault();
- var data = eventGridEvent.Data as JObject;
-
- // Validate whether EventType is of "Microsoft.EventGrid.SubscriptionValidationEvent"
- if (string.Equals(eventGridEvent.EventType, EventTypes.EventGridSubscriptionValidationEvent, StringComparison.OrdinalIgnoreCase))
- {
- var eventData = data.ToObject<SubscriptionValidationEventData>();
- var responseData = new SubscriptionValidationResponseData
- {
- ValidationResponse = eventData.ValidationCode
- };
- if (responseData.ValidationResponse != null)
- {
- return Ok(responseData);
- }
- }
-
- // Implement your logic here.
- ...
- ...
- }
-```
-
-The above code depends on the `Microsoft.Azure.EventGrid` NuGet package. To learn more about Event Grid endpoint validation, visit the [endpoint validation documentation](../../../event-grid/receive-events.md#endpoint-validation).
-
-We'll then subscribe this webhook to the `recording` event:
-
-1. Select the `Events` blade from your Azure Communication Services resource.
-2. Select `Event Subscription` as shown below.
-![Screenshot showing event grid UI](./media/call-recording/image1-event-grid.png)
-3. Configure the event subscription and select `Call Recording File Status Update` as the `Event Type`. Select `Webhook` as the `Endpoint type`.
-![Create Event Subscription](./media/call-recording/image2-create-event-subscription.png)
-4. Input your webhook's URL into `Subscriber Endpoint`.
-![Subscribe to Event](./media/call-recording/image3-subscribe-to-event.png)
-
-Your webhook will now be notified whenever your Communication Services resource is used to record a call.
-
-## Notification schema
-When the recording is available to download, your Communication Services resource will emit a notification with the following event schema. The document IDs for the recording can be fetched from the `documentId` fields of each `recordingChunk`.
-
-```json
-{
- "id": string, // Unique guid for event
- "topic": string, // Azure Communication Services resource id
- "subject": string, // /recording/call/{call-id}
- "data": {
- "recordingStorageInfo": {
- "recordingChunks": [
- {
- "documentId": string, // Document id for retrieving from AMS storage
- "index": int, // Index providing ordering for this chunk in the entire recording
-                    "endReason": string, // Reason for chunk ending: "SessionEnded", "ChunkMaximumSizeExceeded", etc.
- }
- ]
- },
- "recordingStartTime": string, // ISO 8601 date time for the start of the recording
- "recordingDurationMs": int, // Duration of recording in milliseconds
-        "sessionEndReason": string // Reason for call ending: "CallEnded", "InitiatorLeft", etc.
- },
- "eventType": string, // "Microsoft.Communication.RecordingFileStatusUpdated"
- "dataVersion": string, // "1.0"
- "metadataVersion": string, // "1"
- "eventTime": string // ISO 8601 date time for when the event was created
-}
-
-```
-
-## Download the recorded media files
-
-Once we get the document ID for the file we want to download, we'll call the below Azure Communication Services APIs to download the recorded media and metadata using HMAC authentication.
-
-The maximum recording file size is 1.5 GB. When this file size is exceeded, the recorder automatically splits the recorded media into multiple files.
-
-The client should be able to download all media files with a single request. If there's an issue, the client can retry with a range header to avoid redownloading segments that have already been downloaded.
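For illustration (not part of the original article), a minimal sketch of a resumable download using a standard HTTP `Range` header; `downloadUrl` and `bytesAlreadyDownloaded` are hypothetical placeholders, and in practice the HMAC headers described below must also be added:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

var client = new HttpClient();

string downloadUrl = "<download-recording-url>"; // hypothetical placeholder
long bytesAlreadyDownloaded = 4_194_304;         // hypothetical: bytes already saved to disk

var request = new HttpRequestMessage(HttpMethod.Get, downloadUrl);

// Request only the remaining bytes; an open-ended range reads to the end of the file.
request.Headers.Range = new RangeHeaderValue(bytesAlreadyDownloaded, null);

// The HMAC headers from the Authentication section below must also be added here.
var response = await client.SendAsync(request);
Console.WriteLine(response.StatusCode); // 206 (PartialContent) when the range is honored
```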
-
-To download recorded media:
-- Method: `GET`
-- URL: `https://contoso.communication.azure.com/recording/download/{documentId}?api-version=2021-04-15-preview1`
-To download recorded media metadata:
-- Method: `GET`
-- URL: `https://contoso.communication.azure.com/recording/download/{documentId}/metadata?api-version=2021-04-15-preview1`
-### Authentication
-To download recorded media and metadata, use HMAC authentication to authenticate the request against Azure Communication Services APIs.
-
-Create an `HttpClient` and add the necessary headers using the `HmacAuthenticationUtils` provided below:
-
-```csharp
- var client = new HttpClient();
-
- // Set Http Method
- var method = HttpMethod.Get;
- StringContent content = null;
-
- // Build request
- var request = new HttpRequestMessage
- {
- Method = method, // Http GET method
- RequestUri = new Uri(<Download_Recording_Url>), // Download recording Url
- Content = content // content if required for POST methods
- };
-
- // Question: Why do we need to pass String.Empty to CreateContentHash() method?
- // Answer: In HMAC authentication, the hash of the content is one of the parameters used to generate the HMAC token.
- // In our case our recording download APIs are GET methods and do not have any content/body to be passed in the request.
- // However in this case we still need the SHA256 hash for the empty content and hence we pass an empty string.
-
- string serializedPayload = string.Empty;
-
- // Hash the content of the request.
- var contentHashed = HmacAuthenticationUtils.CreateContentHash(serializedPayload);
-
- // Add HMAC headers.
-    HmacAuthenticationUtils.AddHmacHeaders(request, contentHashed, accessKey);
-
- // Make a request to the Azure Communication Services APIs mentioned above
- var response = await client.SendAsync(request).ConfigureAwait(false);
-```
-
-#### HmacAuthenticationUtils
-The below utilities can be used to manage your HMAC workflow.
-
-**Create content hash**
-
-```csharp
-public static string CreateContentHash(string content)
-{
- var alg = SHA256.Create();
-
- using (var memoryStream = new MemoryStream())
- using (var contentHashStream = new CryptoStream(memoryStream, alg, CryptoStreamMode.Write))
- {
- using (var swEncrypt = new StreamWriter(contentHashStream))
- {
- if (content != null)
- {
- swEncrypt.Write(content);
- }
- }
- }
-
- return Convert.ToBase64String(alg.Hash);
-}
-```
-
-**Add HMAC headers**
-
-```csharp
-public static void AddHmacHeaders(HttpRequestMessage requestMessage, string contentHash, string accessKey)
-{
- var utcNowString = DateTimeOffset.UtcNow.ToString("r", CultureInfo.InvariantCulture);
- var uri = requestMessage.RequestUri;
- var host = uri.Authority;
- var pathAndQuery = uri.PathAndQuery;
-
- var stringToSign = $"{requestMessage.Method}\n{pathAndQuery}\n{utcNowString};{host};{contentHash}";
- var hmac = new HMACSHA256(Convert.FromBase64String(accessKey));
- var hash = hmac.ComputeHash(Encoding.ASCII.GetBytes(stringToSign));
- var signature = Convert.ToBase64String(hash);
- var authorization = $"HMAC-SHA256 SignedHeaders=date;host;x-ms-content-sha256&Signature={signature}";
-
- requestMessage.Headers.Add("x-ms-content-sha256", contentHash);
- requestMessage.Headers.Add("Date", utcNowString);
- requestMessage.Headers.Add("Authorization", authorization);
-}
-```
-
-## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md?pivots=platform-azp&tabs=windows#clean-up-resources).
-
-## Next steps
-For more information, see the following articles:
-
-- Check out our [web calling sample](../../samples/web-calling-sample.md)
-- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)
-- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
confidential-ledger Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authentication-azure-ad.md
To do so, the client performs a two-step process:
Azure confidential ledger then executes the request on behalf of the security principal for which Azure AD issued the access token. All authorization checks are performed using this identity. In most cases, the recommendation is to use one of the Azure confidential ledger SDKs to access the service programmatically, as they remove much of the hassle of implementing the
-flow above (and much more). See, for example, the [Python client library](https://pypi.org/project/azure-confidentialledger/) and [.NET client library](/dotnet/api/overview/azure/storage.confidentialledger-readme-pre).
+flow above (and much more). See, for example, the [Python client library](https://pypi.org/project/azure-confidentialledger/) and [.NET client library](/dotnet/api/azure.security.confidentialledger).
The main authenticating scenarios are:
For detailed steps on registering an Azure confidential ledger application with
At the end of registration, the application owner gets the following values:
-- An **Application ID** (also known as the AAD Client ID or appID)
+- An **Application ID** (also known as the Azure Active Directory Client ID or appID)
- An **authentication key** (also known as the shared secret).

The application must present both of these values to Azure Active Directory to get a token.
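As a hedged illustration (not taken from this article), a .NET sketch of exchanging those two values for an Azure AD token with `Azure.Identity`; all values are placeholders, and the confidential ledger scope shown is an assumption:

```csharp
using System;
using Azure.Core;
using Azure.Identity;

// Values obtained from the app registration (placeholders).
var credential = new ClientSecretCredential(
    tenantId: "<tenant-id>",
    clientId: "<application-id>",
    clientSecret: "<authentication-key>");

// The scope below is an assumed confidential ledger resource scope.
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://confidential-ledger.azure.com/.default" }));

Console.WriteLine($"Token expires on: {token.ExpiresOn}");
```

In most cases you won't need to request tokens directly; passing the credential to a client library, as the SDKs do, handles this flow for you.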
This flow is called the [OAuth2 token exchange flow](https://tools.ietf.org/html/
- [Integrating applications with Azure Active Directory](../active-directory/develop/quickstart-register-app.md)
- [Use portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md)
- [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).
-- [Authenticating Azure confidential ledger nodes](authenticate-ledger-nodes.md)
+- [Authenticating Azure confidential ledger nodes](authenticate-ledger-nodes.md)
confidential-ledger Quickstart Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-net.md
Get started with the Azure confidential ledger client library for .NET. [Azure c
Azure confidential ledger client library resources:
-[API reference documentation](/dotnet/api/overview/azure/security.confidentialledger-readme-pre) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/confidentialledger/Azure.Security.ConfidentialLedger) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Security.ConfidentialLedger/1.0.0)
+[API reference documentation](/dotnet/api/overview/azure/security.confidentialledger-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/confidentialledger/Azure.Security.ConfidentialLedger) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Security.ConfidentialLedger/1.0.0)
## Prerequisites
dotnet add package Azure.Identity
## Object model
-The Azure confidential ledger client library for .NET allows you to create an immutable ledger entry in the service. The [Code examples](#code-examples) section shows how to create a write to the ledger and retrieve the transaction id.
+The Azure confidential ledger client library for .NET allows you to create an immutable ledger entry in the service. The [Code examples](#code-examples) section shows how to create a write to the ledger and retrieve the transaction ID.
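Ahead of the full code examples that follow, here's a minimal sketch of that workflow, assuming the `Azure.Security.ConfidentialLedger` and `Azure.Identity` packages; the ledger URI is a placeholder:

```csharp
using System;
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.Security.ConfidentialLedger;

var ledgerClient = new ConfidentialLedgerClient(
    new Uri("https://<your-ledger-name>.confidential-ledger.azure.com"),
    new DefaultAzureCredential());

// Append an entry and wait until the write is durably committed.
Operation postOperation = ledgerClient.PostLedgerEntry(
    waitUntil: WaitUntil.Completed,
    RequestContent.Create(new { contents = "Hello world!" }));

// The operation ID is the transaction ID of the write.
string transactionId = postOperation.Id;
Console.WriteLine($"Transaction ID: {transactionId}");
```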
## Code examples
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
The following configurations aren't restored after the point-in-time recovery:
* Consistency settings. By default, the account is restored with session consistency.
* Regions.
* Stored procedures, triggers, UDFs.
+* Role-based access control assignments. These will need to be reassigned.
You can add these configurations to the restored account after the restore is completed.
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
To migrate an Azure Cosmos DB account from using IP firewall rules to using virt
After an Azure Cosmos DB account is configured for a service endpoint for a subnet, each request from that subnet is sent differently to Azure Cosmos DB. The requests are sent with virtual network and subnet source information instead of a source public IP address. These requests will no longer match an IP filter configured on the Azure Cosmos DB account, which is why the following steps are necessary to avoid downtime.
-Before proceeding, enable the Azure Cosmos DB service endpoint on the virtual network and subnet using the step shown above in "Enable the service endpoint for an existing subnet of a virtual network".
- 1. Get virtual network and subnet information: ```powershell
Before proceeding, enable the Azure Cosmos DB service endpoint on the virtual ne
1. Repeat the previous steps for all Azure Cosmos DB accounts accessed from the subnet.
+1. Enable the Azure Cosmos DB service endpoint on the virtual network and subnet using the step shown in the [Enable the service endpoint for an existing subnet of a virtual network](#configure-using-powershell) section of this article.
+ 1. Remove the IP firewall rule for the subnet from the Azure Cosmos DB account's Firewall rules.

## Frequently asked questions
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
The container copy job will run in the write region. If there are accounts confi
The account's write region may change in the rare scenario of a region outage or due to manual failover. In such a scenario, incomplete container copy jobs created within the account would fail. You would need to recreate these failed jobs. Recreated jobs would then run in the new (current) write region.
-### Why is a new database *_datatransferstate* created in the account when I run container copy jobs? Am I being charged for this database?
-* *_datatransferstate* is a database that is created while running container copy jobs. This database is used by the platform to store the state and progress of the copy job.
+### Why is a new database *__datatransferstate* created in the account when I run container copy jobs? Am I being charged for this database?
+* *__datatransferstate* is a database that is created while running container copy jobs. This database is used by the platform to store the state and progress of the copy job.
* The database uses manually provisioned throughput of 800 RU/s. You'll be charged for this database.
-* Deleting this database will remove the container copy job history from the account. It can be safely deleted once all the jobs in the account have completed, if you no longer need the job history. The platform will not clean up the *_datatransferstate* database automatically.
+* Deleting this database will remove the container copy job history from the account. It can be safely deleted once all the jobs in the account have completed, if you no longer need the job history. The platform will not clean up the *__datatransferstate* database automatically.
## Supported regions
Make sure the target container is created before running the job as specified in
* Error - Shared throughput database creation is not supported for serverless accounts

Job creation on serverless accounts may fail with the error *"Shared throughput database creation is not supported for serverless accounts"*.
-As a work-around, create a database called *_datatransferstate* manually within the account and try creating the container copy job again.
+As a workaround, create a database called *__datatransferstate* manually within the account and try creating the container copy job again.
```
ERROR: (BadRequest) Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx; Reason: (Shared throughput database creation is not supported for serverless accounts.)
```
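As a hedged sketch of that workaround (assuming the v3 `Microsoft.Azure.Cosmos` .NET SDK; the endpoint and key are placeholders), the state database can be pre-created without shared throughput on a serverless account:

```csharp
using System;
using Microsoft.Azure.Cosmos;

var cosmosClient = new CosmosClient("<account-endpoint>", "<account-key>");

// On a serverless account, create the database without provisioned throughput.
DatabaseResponse response =
    await cosmosClient.CreateDatabaseIfNotExistsAsync("__datatransferstate");

Console.WriteLine($"Status: {response.StatusCode}");
```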
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/linux-emulator.md
Use the following steps to run the emulator on Linux:
|Name |Default |Description |
||||
-| Ports: `-p` | | Currently, only ports 8081 and 10251-10255 are needed by the emulator endpoint. |
+| Ports: `-p` | | Currently, only ports `8081` and `10250-10255` are needed by the emulator endpoint. |
| `AZURE_COSMOS_EMULATOR_PARTITION_COUNT` | 10 | Controls the total number of physical partitions, which in turn controls the number of containers that can be created and can exist at a given point in time. We recommend starting small to improve the emulator startup time, for example, 3. |
| Memory: `-m` | | At least 3 GB of memory is required. |
| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least four cores are recommended. |
This section provides tips to troubleshoot errors when using the Linux emulator.
- Verify that the specific emulator container is in a running state.
-- Verify that no other applications are using emulator ports: 8081 and 10250-10255.
+- Verify that no other applications are using emulator ports: `8081` and `10250-10255`.
-- Verify that the container port 8081, is mapped correctly and accessible from an environment outside of the container.
+- Verify that the container port `8081` is mapped correctly and accessible from an environment outside of the container.
```bash
netstat -lt
```
When reporting an issue with the Linux emulator, provide as much information as
- Description of the workload
- Sample of the database/collection and item used
- Include the console output from starting the Docker container for the emulator in attached mode
-- Send all of the above to [Azure Cosmos DB team](mailto:cdbportalfeedback@microsoft.com).
+- Post feedback on our [Azure Cosmos DB Q&A forums](/answers/topics/azure-cosmos-db.html).
## Next steps
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
Previously updated : 06/01/2022 Last updated : 10/20/2022
In this step, you'll query the document endpoint for the API for NoSQL account.
## Grant access to your Azure Cosmos DB account
-In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the [Azure Cosmos DB Built-in Data Reader](how-to-setup-rbac.md#built-in-role-definitions) role.
+In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity for control-plane access. For data-plane access, you'll create a new custom role with access to read metadata.
> [!TIP]
-> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Built-in Data Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
+> For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
1. Use ``az cosmosdb show`` with the **query** parameter set to ``id``. Store the result in a shell variable named ``scope``.
In this step, you'll assign a role to the function app's system-assigned managed
## Programmatically access the Azure Cosmos DB keys
-We now have a function app that has a system-assigned managed identity with the **Cosmos DB Built-in Data Reader** role. The following function app will query the Azure Cosmos DB account for a list of databases.
+We now have a function app that has a system-assigned managed identity with the custom role. The following function app will query the Azure Cosmos DB account for a list of databases.
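The steps below build this function app. As a rough sketch of the query it will perform (assuming the v3 `Microsoft.Azure.Cosmos` SDK with `Azure.Identity`; the endpoint is a placeholder):

```csharp
using System;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// Inside the function app, DefaultAzureCredential picks up the
// system-assigned managed identity automatically.
var client = new CosmosClient("<account-endpoint>", new DefaultAzureCredential());

// Listing databases is a metadata read, which the custom role must allow.
using FeedIterator<DatabaseProperties> iterator =
    client.GetDatabaseQueryIterator<DatabaseProperties>();

while (iterator.HasMoreResults)
{
    foreach (DatabaseProperties database in await iterator.ReadNextAsync())
    {
        Console.WriteLine(database.Id);
    }
}
```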
1. Create a local function project with the ``--dotnet`` parameter in a folder named ``csmsfunc``. Change your shell's directory
cosmos-db Connect Using Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-mongoose.md
After you create the database, you'll use the name in the `COSMOSDB_DBNAME` envi
3. Install the necessary packages using one of the ```npm install``` options:
- * Mongoose: ```npm install mongoose@5 --save```
+ * **Mongoose**: ```npm install mongoose@5.13.15 --save```
- > [!Note]
- > The Mongoose example connection below is based on Mongoose 5+, which has changed since earlier versions.
+ > [!IMPORTANT]
+ > The Mongoose example connection below is based on Mongoose 5+, which has changed since earlier versions. Azure Cosmos DB for MongoDB is compatible with up to version `5.13.15` of Mongoose. For more information, please see the [issue discussion](https://github.com/Automattic/mongoose/issues/11072) in the Mongoose GitHub repository.
- * Dotenv (if you'd like to load your secrets from an .env file): ```npm install dotenv --save```
+ * **Dotenv** *(if you'd like to load your secrets from an .env file)*: ```npm install dotenv --save```
> [!NOTE]
> The ```--save``` flag adds the dependency to the package.json file.
cosmos-db Tutorial Develop Nodejs Part 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-5.md
Mongoose is an object data modeling (ODM) library for MongoDB and Node.js. You c
1. Install the mongoose npm module, which is an API that's used to talk to MongoDB.

```bash
- npm i mongoose --save
+ npm install mongoose@5.13.15 --save
```
+ > [!IMPORTANT]
+ > Azure Cosmos DB for MongoDB is compatible with up to version `5.13.15` of Mongoose. For more information, please see the [issue discussion](https://github.com/Automattic/mongoose/issues/11072) in the Mongoose GitHub repository.
+ 1. In the **server** folder, create a file named **mongo.js**. You'll add the connection details of your Azure Cosmos DB account to this file.
1. Copy the following code into the **mongo.js** file. The code provides the following functionality:
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabl
To learn how to query using this newly enabled feature, visit [advanced queries](advanced-queries.md).

## Next steps
+* For a reference of the log and metric data, see [monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).
+
* For more information on how to query resource-specific tables, see [troubleshooting using resource-specific tables](monitor-logs-basic-queries.md#resource-specific-queries).
* For more information on how to query AzureDiagnostics tables, see [troubleshooting using AzureDiagnostics tables](monitor-logs-basic-queries.md#azure-diagnostics-queries).
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md
Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure C
For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
+> [!IMPORTANT]
+> Setting `EnableContentResponseOnWrite` to `false` will also disable the response from a trigger operation.
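For example, a minimal sketch with the v3 .NET SDK (account, database, and container names are placeholders):

```csharp
using System;
using Microsoft.Azure.Cosmos;

var client = new CosmosClient("<account-endpoint>", "<account-key>");
Container container = client.GetContainer("<database>", "<container>");

var requestOptions = new ItemRequestOptions
{
    // Don't return the created resource over the wire.
    EnableContentResponseOnWrite = false
};

ItemResponse<object> response = await container.CreateItemAsync<object>(
    new { id = "item-1", pk = "partition-1" },
    new PartitionKey("partition-1"),
    requestOptions);

// response.Resource is null, but headers such as the request charge are still populated.
Console.WriteLine($"Request charge: {response.RequestCharge}");
```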
+
## Next steps

For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
cosmos-db How To Manage Conflicts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-conflicts.md
udp_collection = self.try_create_document_collection(
## Create a custom conflict resolution policy using a stored procedure
-These samples show how to set up a container with a custom conflict resolution policy with a stored procedure to resolve the conflict. These conflicts don't show up in the conflict feed unless there's an error in your stored procedure. After the policy is created with the container, you need to create the stored procedure. The .NET SDK sample below shows an example. This policy is supported on NoSQL Api only.
+These samples show how to set up a container with a custom conflict resolution policy. This policy uses the logic in a stored procedure to resolve the conflict. If a stored procedure is designated to resolve conflicts, conflicts won't show up in the conflict feed unless there's an error in the designated stored procedure.
+
+After the policy is created with the container, you need to create the stored procedure. The .NET SDK sample below shows an example of this workflow. This policy is supported in the API for NoSQL only.
### Sample custom conflict resolution stored procedure
After your container is created, you must create the `resolver` stored procedure
## Create a custom conflict resolution policy
-These samples show how to set up a container with a custom conflict resolution policy. These conflicts show up in the conflict feed.
+These samples show how to set up a container with a custom conflict resolution policy. With this implementation, each conflict will show up in the conflict feed. It's up to you to handle the conflicts individually from the conflict feed.
### <a id="create-custom-conflict-resolution-policy-dotnet"></a>.NET SDK
manual_collection = client.CreateContainer(database['_self'], collection)
## Read from conflict feed
-These samples show how to read from a container's conflict feed. Conflicts show up in the conflict feed only if they weren't resolved automatically or if using a custom conflict policy.
+These samples show how to read from a container's conflict feed. Conflicts may show up in the conflict feed only for a couple of reasons:
+
+- The conflict was not resolved automatically
+- The conflict caused an error with the designated stored procedure
+- The conflict resolution policy is set to **custom** and does not designate a stored procedure to handle conflicts
### <a id="read-from-conflict-feed-dotnet"></a>.NET SDK
cosmos-db Pagination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/pagination.md
Here are some examples for processing results from queries with multiple pages:
## Continuation tokens
-In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query's progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token. For the Python SDK and Node.js SDK, it's supported for single partition queries, and the PK must be specified in the options object because it's not sufficient to have it in the query itself.
+In the .NET SDK and Java SDK, you can optionally use continuation tokens as a bookmark for your query's progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token. For the Python SDK, continuation tokens are only supported for single partition queries. The partition key must be specified in the options object because it's not sufficient to have it in the query itself.
Here are some examples for using continuation tokens:
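For instance, a minimal .NET sketch (client setup values are placeholders) that captures a continuation token from one page and later resumes the query from that bookmark:

```csharp
using System;
using Microsoft.Azure.Cosmos;

var client = new CosmosClient("<account-endpoint>", "<account-key>");
Container container = client.GetContainer("<database>", "<container>");

string continuationToken;

// Read one page and remember where the query left off.
using (FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    "SELECT * FROM c",
    requestOptions: new QueryRequestOptions { MaxItemCount = 100 }))
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
    continuationToken = page.ContinuationToken; // null when the results are exhausted
}

// Later (even from another process), resume from the bookmark.
using FeedIterator<dynamic> resumed = container.GetItemQueryIterator<dynamic>(
    "SELECT * FROM c",
    continuationToken,
    new QueryRequestOptions { MaxItemCount = 100 });

FeedResponse<dynamic> nextPage = await resumed.ReadNextAsync();
Console.WriteLine($"Resumed page item count: {nextPage.Count}");
```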
cosmos-db Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-run-queries.md
SELECT date_trunc('hour', created_at) AS hour,
sum((payload->>'distinct_size')::int) AS num_commits FROM github_events WHERE event_type = 'PushEvent' AND
- payload @> '{"ref":"refs/heads/main"}'
+ payload @> '{"ref":"refs/heads/master"}'
GROUP BY hour
ORDER BY hour;
```
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 08/02/2022 Last updated : 10/19/2022

# PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
### Citus extension

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 |
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 | 11.1.3 |
### Data types extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
-> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 |
-> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 |
-> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 |
-> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 |
-> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 |
-> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 |
-> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 |
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 | 2.16 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 | 1.4.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 | 2.5.0 |
### Full-text search extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
-> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Functions extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
-> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 |
-> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 |
-> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 |
-> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 |
-> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | |
-> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 | 4.7.0 |
+> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | | |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Index types extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
-> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 |
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 | 1.7 |
### Language extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
-> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Miscellaneous extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
-> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 |
-> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 |
-> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 |
-> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 |
-> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 |
-> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 |
-> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 |
-> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.10 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 | 1.4 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### PostGIS extensions

> [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> |||||||
> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
-> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
## pg_stat_statements
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 10/14/2022 Last updated : 10/20/2022

# Supported database versions in Azure Cosmos DB for PostgreSQL
customizable during creation. Azure Cosmos DB for PostgreSQL currently supports
following major [PostgreSQL versions](https://www.postgresql.org/docs/release/):
+### PostgreSQL version 15
+
+The current minor release is 15.0. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/15.0/) to
+learn more about improvements and fixes in this minor release.
+
### PostgreSQL version 14

The current minor release is 14.5. Refer to the [PostgreSQL
policy](https://www.postgresql.org/support/versioning/).
| Version | What's New | Supported since | Retirement date (Azure) |
| - | - | - | - |
-| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | May 7, 2019 | November 9, 2023 |
-| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Apr 6, 2021 | November 14, 2024
-| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | Apr 6, 2021 | November 13, 2025
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | Oct 1, 2021 | November 12, 2026
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | May 7, 2019 | Nov 9, 2023 |
+| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Apr 6, 2021 | Nov 14, 2024 |
+| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | Apr 6, 2021 | Nov 13, 2025 |
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | Oct 1, 2021 | Nov 12, 2026 |
+| [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/15/release-15.html) | Oct 20, 2022 | Nov 11, 2027 |
### Retired PostgreSQL engine versions not supported in Azure Cosmos DB for PostgreSQL
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
In addition to the built-in roles, users may also create [custom roles](../role-
> [!TIP]
> Custom roles that need to access data stored within Azure Cosmos DB or use Data Explorer in the Azure portal must have `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action.
+> [!NOTE]
+> Custom role assignments may not always be visible in the Azure portal.
+
## <a id="prevent-sdk-changes"></a>Preventing changes from the Azure Cosmos DB SDKs

The Azure Cosmos DB resource provider can be locked down to prevent any changes to resources from a client connecting using the account keys (that is, applications connecting via the Azure Cosmos DB SDK). This feature may be desirable for users who want higher degrees of control and governance for production environments. Preventing changes from the SDK also enables features such as resource locks and diagnostic logs for control plane operations. Clients connecting with the Azure Cosmos DB SDK will be prevented from changing any property of the Azure Cosmos DB accounts, databases, containers, and throughput. Operations that read and write data to Azure Cosmos DB containers themselves aren't impacted.
cosmos-db Dotnet Standard Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/dotnet-standard-sdk.md
- Title: Azure Cosmos DB for Table .NET Standard SDK & Resources
-description: Learn all about the Azure Cosmos DB for Table and the .NET Standard SDK including release dates, retirement dates, and changes made between each version.
-Previously updated : 11/03/2021
-# Azure Cosmos DB Table .NET Standard API: Download and release notes
-> [!div class="op_single_selector"]
->
-> * [.NET](dotnet-sdk.md)
-> * [.NET Standard](dotnet-standard-sdk.md)
-> * [Java](java-sdk.md)
-> * [Node.js](nodejs-sdk.md)
-> * [Python](python-sdk.md)
-
-| | Links |
-|||
-|**SDK download**|[NuGet](https://www.nuget.org/packages/Azure.Data.Tables/)|
-|**Sample**|[Azure Cosmos DB for Table .NET Sample](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started)|
-|**Quickstart**|[Quickstart](quickstart-dotnet.md)|
-|**Tutorial**|[Tutorial](tutorial-develop-table-dotnet.md)|
-|**Current supported framework**|[Microsoft .NET Standard 2.0](https://www.nuget.org/packages/NETStandard.Library)|
-|**Report Issue**|[Report Issue](https://github.com/Azure/azure-cosmos-table-dotnet/issues)|
-
-## Release notes for 2.0.0 series
-2.0.0 series takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to Azure Cosmos DB endpoint.
-
-### <a name="2.0.0-preview"></a>2.0.0-preview
-* initial preview of 2.0.0 Table SDK that takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to Azure Cosmos DB endpoint. The public API remains the same.
-
-## Release notes for 1.0.0 series
-1.0.0 series takes the dependency on [Microsoft.Azure.DocumentDB.Core](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/).
-
-### <a name="1.0.8"></a>1.0.8
-* Add support to set the TTL property if it's an Azure Cosmos DB endpoint
-* Honor the retry policy upon timeout and task-canceled exceptions
-* Fix an intermittent task-canceled exception seen in ASP.NET applications
-* Fix Azure Table storage retrieval when using the secondary-endpoint-only location mode
-* Update `Microsoft.Azure.DocumentDB.Core` dependency version to 2.11.2, which fixes an intermittent null reference exception
-* Update `Odata.Core` dependency version to 7.6.4, which fixes a compatibility conflict with Azure shell
-
-### <a name="1.0.7"></a>1.0.7
-* Performance improvement by setting Table SDK default trace level to SourceLevels.Off, which can be opted in via app.config
-
-### <a name="1.0.5"></a>1.0.5
-* Introduce new config under TableClientConfiguration to use Rest Executor to communicate with Azure Cosmos DB for Table
-
-### <a name="1.0.5-preview"></a>1.0.5-preview
-* Bug fixes
-
-### <a name="1.0.4"></a>1.0.4
-* Bug fixes
-* Provide HttpClientTimeout option for RestExecutorConfiguration.
-
-### <a name="1.0.4-preview"></a>1.0.4-preview
-* Bug fixes
-* Provide HttpClientTimeout option for RestExecutorConfiguration.
-
-### <a name="1.0.1"></a>1.0.1
-* Bug fixes
-
-### <a name="1.0.0"></a>1.0.0
-* General availability release
-
-### <a name="0.11.0-preview"></a>0.11.0-preview
-
-* Changes were made to how CloudTableClient can be configured. It now takes a TableClientConfiguration object during construction. TableClientConfiguration provides different properties to configure the client behavior depending on whether the target endpoint is Azure Cosmos DB for Table or Azure Storage API for Table.
-* Added support to TableQuery to return results in sorted order on a custom column. This feature is only supported on Azure Cosmos DB Table endpoints.
-* Added support to expose RequestCharges on various result types. This feature is only supported on Azure Cosmos DB Table endpoints.
-
-### <a name="0.10.1-preview"></a>0.10.1-preview
-* Add support for SAS tokens, and for TablePermissions, ServiceProperties, and ServiceStats operations, against Azure Storage Table endpoints.
- > [!NOTE]
- > Some functionalities in previous Azure Storage Table SDKs are not yet supported, such as client-side encryption.
-
-### <a name="0.10.0-preview"></a>0.10.0-preview
-* Add support for core CRUD, batch, and query operations against Azure Storage Table endpoints.
- > [!NOTE]
- > Some functionalities in previous Azure Storage Table SDKs are not yet supported, such as client-side encryption.
-
-### <a name="0.9.1-preview"></a>0.9.1-preview
-* Azure Cosmos DB Table .NET Standard SDK is a cross-platform .NET library that provides efficient access to the Table data model on Azure Cosmos DB. This initial release supports the full set of Table and Entity CRUD + Query functionalities with similar APIs as the [Azure Cosmos DB Table SDK For .NET Framework](dotnet-sdk.md).
- > [!NOTE]
- > Azure Storage Table endpoints are not yet supported in the 0.9.1-preview version.
-
-## Release and Retirement dates
-Microsoft provides notification at least **12 months** in advance of retiring an SDK to smooth the transition to a newer, supported version.
-
-This cross-platform .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) will replace the .NET Framework library [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table).
-
-### 2.0.0 series
-| Version | Release Date | Retirement Date |
-| | | |
-| [2.0.0-preview](#2.0.0-preview) |August 22, 2019 | |
-
-### 1.0.0 series
-| Version | Release Date | Retirement Date |
-| | | |
-| [1.0.5](#1.0.5) |September 13, 2019 | |
-| [1.0.5-preview](#1.0.5-preview) |August 20, 2019 | |
-| [1.0.4](#1.0.4) |August 12, 2019 | |
-| [1.0.4-preview](#1.0.4-preview) |July 26, 2019 | |
-| 1.0.2-preview |May 2, 2019 | |
-| [1.0.1](#1.0.1) |April 19, 2019 | |
-| [1.0.0](#1.0.0) |March 13, 2019 | |
-| [0.11.0-preview](#0.11.0-preview) |March 5, 2019 | |
-| [0.10.1-preview](#0.10.1-preview) |January 22, 2019 | |
-| [0.10.0-preview](#0.10.0-preview) |December 18, 2018 | |
-| [0.9.1-preview](#0.9.1-preview) |October 18, 2018 | |
--
-## FAQ
--
-## See also
-To learn more about the Azure Cosmos DB for Table, see [Introduction to Azure Cosmos DB for Table](introduction.md).
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/introduction.md
Create an Azure Cosmos DB account in the [Azure portal](https://portal.azure.com
Here are a few pointers to get you started: * [Build a .NET application by using the API for Table](quickstart-dotnet.md)
-* [Develop with the API for Table in .NET](tutorial-develop-table-dotnet.md)
* [Query table data by using the API for Table](tutorial-query.md)
* [Learn how to set up Azure Cosmos DB global distribution by using the API for Table](tutorial-global-distribution.md)
-* [Azure Cosmos DB Table .NET Standard SDK](dotnet-standard-sdk.md)
-* [Azure Cosmos DB Table .NET SDK](dotnet-sdk.md)
-* [Azure Cosmos DB Table Java SDK](java-sdk.md)
-* [Azure Cosmos DB Table Node.js SDK](nodejs-sdk.md)
-* [Azure Cosmos DB Table SDK for Python](python-sdk.md)
+* [Azure Cosmos DB Table .NET SDK](/dotnet/api/overview/azure/data.tables-readme)
+* [Azure Cosmos DB Table Java SDK](/java/api/overview/azure/data-tables-readme)
+* [Azure Cosmos DB Table Node.js SDK](/javascript/api/overview/azure/data-tables-readme)
+* [Azure Cosmos DB Table SDK for Python](/python/api/azure-data-tables/azure.data.tables)
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query.md
The queries in this article use the following sample `People` table:
See [Querying Tables and Entities](/rest/api/storageservices/fileservices/querying-tables-and-entities) for details on how to query by using the API for Table.
-For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB for Table](introduction.md) and [Develop with the API for Table in .NET](tutorial-develop-table-dotnet.md).
+For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB for Table](introduction.md) and [Develop with the API for Table in .NET](quickstart-dotnet.md).
## Prerequisites
-For these queries to work, you must have an Azure Cosmos DB account and have entity data in the container. Don't have any of those? Complete the [five-minute quickstart](quickstart-dotnet.md) or the [developer tutorial](tutorial-develop-table-dotnet.md) to create an account and populate your database.
+For these queries to work, you must have an Azure Cosmos DB account and have entity data in the container. Don't have any of those? Complete the [five-minute quickstart](quickstart-dotnet.md) to create an account and populate your database.
## Query on PartitionKey and RowKey
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Title: Buy an Azure reservation description: Learn about important points to help you buy an Azure reservation. -+ Previously updated : 09/07/2022 Last updated : 10/20/2022
Depending on how you pay for your Azure subscription, email reservation notifica
- Cancellation
- Scope change
-For customers with EA subscriptions:
+Notifications are sent to the following users:
-- Notifications are sent only to the EA notification contacts.
-- Users added to a reservation using Azure RBAC (IAM) permission don't receive any email notifications.
+- Customers with EA subscriptions
+ - Notifications are sent to the EA notification contacts, EA admin, reservation owners, and the reservation administrator.
+- Customers with Microsoft Customer Agreement (Azure Plan)
+ - Notifications are sent to the reservation owners and the reservation administrator.
+- Cloud Solution Provider and new commerce partners
+ - Emails are sent to the partner notification contact.
+- Individual subscription customers with pay-as-you-go rates
+ - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator.
-For customers with individual subscriptions:
-
-- The purchaser receives a purchase notification.
-- At the time of purchase, the subscription billing account owner receives a purchase notification.
-- The account owner receives all other notifications.

## Next steps
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
Title: Automatically renew Azure reservations description: Learn how you can automatically renew Azure reservations to continue getting reservation discounts. -+ Previously updated : 08/29/2022 Last updated : 10/20/2022
Renewal notification emails are sent 30 days before expiration and again on the
Emails are sent to different people depending on your purchase method:
-
-- EA customers - Emails are sent to the notification contacts set on the EA portal or Enterprise Administrators who are automatically enrolled to receive usage notifications.
-- Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators.
-- Cloud Solution Provider customers - Emails are sent to the partner notification contact. This notification isn't currently supported for Microsoft Customer Agreement subscriptions (CSP Azure Plan subscription).
-
-Renewal notifications are not sent to any Microsoft Customer Agreement (Azure Plan) users.
+- Customers with EA subscriptions
+ - Notifications are sent to the EA notification contacts, EA admin, reservation owners, and the reservation administrator.
+- Customers with Microsoft Customer Agreement (Azure Plan)
+ - Notifications are sent to the reservation owners and the reservation administrator.
+- Cloud Solution Provider and new commerce partners
+ - Emails are sent to the partner notification contact.
+- Individual subscription customers with pay-as-you-go rates
+ - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator.
## Next steps
+
- To learn more about Azure Reservations, see [What are Azure Reservations?](save-compute-costs-reservations.md)
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
Previously updated : 10/14/2022 Last updated : 10/20/2022 # How savings plan discount is applied
-Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan can help you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. The savings can significantly reduce your resource costs by up to 66% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
+Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan can help you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. The savings can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
Each hour with a savings plan, your eligible compute usage is discounted until you reach your commitment amount – subsequent usage after you reach your commitment amount is priced at pay-as-you-go rates. To be eligible for a savings plan benefit, the usage must be generated by a resource within the savings plan's scope. Each hour's benefit is _use-it-or-lose-it_, and can't be rolled over to another hour.
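For example (an illustrative calculation): with a $5-per-hour commitment, eligible usage that totals $7 at the discounted rates in a given hour is covered up to $5, and the remaining $2 is billed at pay-as-you-go rates; if that usage totals only $3, the unused $2 of commitment is lost for that hour.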
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
Previously updated : 10/12/2022 Last updated : 10/20/2022 # What are Azure savings plans for compute?
-Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan helps you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. A savings plan can significantly reduce your resource costs by up to 66% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
+Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan helps you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
Each hour with a savings plan, your compute usage is discounted until you reach your commitment amount – subsequent usage afterward is priced at pay-as-you-go rates. Savings plan commitments are priced in USD for Microsoft Customer Agreement and Microsoft Partner Agreement customers, and in local currency for Enterprise Agreement customers. Usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services is eligible for savings plan discounts.
data-factory Concepts Data Flow Debug Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-debug-mode.md
With debug on, the Data Preview tab will light-up on the bottom panel. Without d
:::image type="content" source="media/data-flow/datapreview.png" alt-text="Data preview":::
+You can sort columns in data preview and rearrange columns using drag and drop. Additionally, there's an export button at the top of the data preview panel that you can use to export the preview data to a CSV file for offline data exploration. You can export up to 1,000 rows of preview data.
+
> [!NOTE]
> File sources only limit the rows that you see, not the rows being read. For very large datasets, it is recommended that you take a small portion of that file and use it for your testing. You can select a temporary file in Debug Settings for each source that is a file dataset type.
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-hana.md
Previously updated : 09/09/2021 Last updated : 10/20/2022 # Copy data from SAP HANA using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-table.md
Previously updated : 09/09/2021 Last updated : 10/20/2022 # Copy data from an SAP table using Azure Data Factory or Azure Synapse Analytics
data-factory Data Flow Conversion Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conversion-functions.md
Previously updated : 08/03/2022 Last updated : 10/19/2022 # Conversion functions in mapping data flow
Conversion functions are used to convert data and test for data types
| Conversion function | Task |
|-|-|
+| [ascii](data-flow-expressions-usage.md#ascii) | Returns the numeric value of the input character. If the input string has more than one character, the numeric value of the first character is returned|
+| [char](data-flow-expressions-usage.md#char) | Returns the ascii character represented by the input number. If number is greater than 256, the result is equivalent to char(number % 256)|
+| [decode](data-flow-expressions-usage.md#decode) | Decodes the encoded input data into a string based on the given charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'|
+| [encode](data-flow-expressions-usage.md#encode) | Encodes the input string data into binary based on a charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'|
| [isBitSet](data-flow-expressions-usage.md#isBitSet) | Checks if a bit position is set in this bitset|
| [setBitSet](data-flow-expressions-usage.md#setBitSet) | Sets bit positions in this bitset|
| [isBoolean](data-flow-expressions-usage.md#isBoolean) | Checks if the string value is a boolean value according to the rules of ``toBoolean()``|
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
Previously updated : 08/03/2022 Last updated : 10/19/2022 # Data transformation expression usage in mapping data flow
Creates an array of items. All items should be of the same type. If no items are
* ``'Washington'`` ___
-<a name="assertErrorMessages" ></a>
+<a name="ascii" ></a>
-### <code>assertErrorMessages</code>
-<code><b>assertErrorMessages() => map</b></code><br/><br/>
-Returns a map of all error messages for the row with assert ID as the key.
-
-Examples
-* ``assertErrorMessages() => ['assert1': 'This row failed on assert1.', 'assert2': 'This row failed on assert2.']. In this example, at(assertErrorMessages(), 'assert1') would return 'This row failed on assert1.'``
+### <code>ascii</code>
+<code><b>ascii(<i>&lt;Input&gt;</i> : string) => number</b></code><br/><br/>
+Returns the numeric value of the input character. If the input string has more than one character, the numeric value of the first character is returned
+* ``ascii('A') -> 65``
+* ``ascii('a') -> 97``
___ - <a name="asin" ></a> ### <code>asin</code>
Calculates an inverse sine value.
* ``asin(0) -> 0.0`` ___
+<a name="assertErrorMessages" ></a>
+
+### <code>assertErrorMessages</code>
+<code><b>assertErrorMessages() => map</b></code><br/><br/>
+Returns a map of all error messages for the row with assert ID as the key.
+
+Examples
+* ``assertErrorMessages() => ['assert1': 'This row failed on assert1.', 'assert2': 'This row failed on assert2.']. In this example, at(assertErrorMessages(), 'assert1') would return 'This row failed on assert1.'``
+
+___
<a name="associate" ></a>
Returns the smallest integer not smaller than the number.
* ``ceil(-0.1) -> 0`` ___
+<a name="char" ></a>
+
+### <code>char</code>
+<code><b>char(<i>&lt;Input&gt;</i> : number) => string</b></code><br/><br/>
+Returns the ascii character represented by the input number. If number is greater than 256, the result is equivalent to char(number % 256)
+* ``char(65) -> 'A'``
+* ``char(97) -> 'a'``
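+
+Since `char` and `ascii` are inverses for single ASCII characters, they compose as a round trip (an illustrative combination, not an example from the original reference):
+* ``char(ascii('A')) -> 'A'``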
+___
<a name="coalesce" ></a>
Duration in milliseconds for number of days.
* ``days(2) -> 172800000L`` ___
+<a name="decode" ></a>
+
+### <code>decode</code>
+<code><b>decode(<i>&lt;Input&gt;</i> : any, <i>&lt;Charset&gt;</i> : string) => binary</b></code><br/><br/>
+Decodes the encoded input data into a string based on the given charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'
+* ``decode(array(toByte(97),toByte(98),toByte(99)), 'US-ASCII') -> abc``
+___
+ <a name="degrees" ></a>
___
## E
+<a name="encode" ></a>
+
+### <code>encode</code>
+<code><b>encode(<i>&lt;Input&gt;</i> : string, <i>&lt;Charset&gt;</i> : string) => binary</b></code><br/><br/>
+Encodes the input string data into binary based on a charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'
+* ``encode('abc', 'US-ASCII') -> array(toByte(97),toByte(98),toByte(99))``
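+
+Because `decode` accepts input of type `any`, it can plausibly reverse `encode` (an illustrative round trip under that assumption, not from the original reference):
+* ``decode(encode('abc', 'US-ASCII'), 'US-ASCII') -> 'abc'``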
+___
+
<a name="endsWith" ></a> ### <code>endsWith</code>
databox Data Box Disk Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md
For the latest information on Azure storage service limits and best practices fo
## Data copy and upload caveats -- Do not copy data directly into the disks. Copy data to pre-created *BlockBlob*,*PageBlob*, and *AzureFile* folders.
+- Do not copy data directly into the disks. Copy data to pre-created *BlockBlob*, *PageBlob*, and *AzureFile* folders.
- A folder under the *BlockBlob* and *PageBlob* is a container. For instance, containers are created as *BlockBlob/container* and *PageBlob/container*. - If a folder has the same name as an existing container, the folder's contents are merged with the container's contents. Files or blobs that aren't already in the cloud are added to the container. If a file or blob has the same name as a file or blob that's already in the container, the existing file or blob is overwritten. - Every file written into *BlockBlob* and *PageBlob* shares is uploaded as a block blob and page blob respectively. - The hierarchy of files is maintained while uploading to the cloud for both blobs and Azure Files. For example, you copied a file at this path: `<container folder>\A\B\C.txt`. This file is uploaded to the same path in cloud. - Any empty directory hierarchy (without any files) created under *BlockBlob* and *PageBlob* folders is not uploaded. - If you don't have long paths enabled on the client, and any path and file name in your data copy exceeds 256 characters, the Data Box Split Copy Tool (DataBoxDiskSplitCopy.exe) or the Data Box Disk Validation tool (DataBoxDiskValidation.cmd) will report failures. To avoid this kind of failure, [enable long paths on your Windows client](/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd#enable-long-paths-in-windows-10-version-1607-and-later).-- To improve performance during data uploads, we recommend that you [enable large file shares on the storage account and increase share capacity to 100 TiB](../../articles/storage/files/storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account). Large file shares are only supported for storage accounts with locally redundant storage (LRS).
+- To improve performance during data uploads, we recommend that you [enable large file shares on the storage account and increase share capacity to 100 TiB](../../articles/storage/files/storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account). Large file shares are only supported for storage accounts with locally redundant storage (LRS).
- If there are any errors when uploading data to Azure, an error log is created in the target storage account. The path to this error log is available in the portal when the upload is complete and you can review the log to take corrective action. Do not delete data from the source without verifying the uploaded data. - File metadata and NTFS permissions are not preserved when the data is uploaded to Azure Files. For example, the *Last modified* attribute of the files will not be kept when the data is copied. - If you specified managed disks in the order, review the following additional considerations:
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom security policies in Microsoft Defender for Cloud
+ Title: Create custom Azure security policies in Microsoft Defender for Cloud
description: Azure custom policy definitions monitored by Microsoft Defender for Cloud.
Last updated 07/20/2022
zone_pivot_groups: manage-asc-initiatives
-# Create custom security initiatives and policies
+# Create custom Azure security initiatives and policies
To help secure your systems and environment, Microsoft Defender for Cloud generates security recommendations. These recommendations are based on industry best practices, which are incorporated into the generic, default security policy supplied to all customers. They can also come from Defender for Cloud's knowledge of industry and regulatory standards.
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Learn more about:
### View vulnerabilities for running images in Azure Container Registry (ACR)
-Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
To provide findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Defender for Cloud offers many enhanced security features that can help protect
- [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription) - [Can I enable Microsoft Defender for Servers on a subset of servers?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers) - [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)-- [My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?](#my-subscription-has-microsoft-defender-for-servers-enabled-do-i-pay-for-not-running-servers)
+- [My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?](#my-subscription-has-microsoft-defender-for-servers-enabled-which-machines-do-i-pay-for)
- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed) - [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice) - [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
To request your discount, [contact Defender for Cloud's support team](https://po
The discount will be effective starting from the approval date, and won't take place retroactively.
-### My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?
+### My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?
-No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any machines that are in a deallocated power state while they're in that state. Machines are billed according to their power state as shown in the following table:
+When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, all machines in that subscription (including machines that are part of PaaS services and reside in this subscription) are billed according to their power state as shown in the following table:
| State | Description | Instance usage billed |
|--|--|--|
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
To stream alerts into **ArcSight**, **SumoLogic**, **Syslog servers**, **LogRhyt
| Tool | Hosted in Azure | Description | |:|:| :|
- | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hubs](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure-Audit/02Collect-Logs-for-Azure-Audit-from-Event-Hub). |
+ | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hubs](https://help.sumologic.com/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-logs-azure-monitor/). |
| ArcSight | No | The ArcSight Azure Event Hubs smart connector is available as part of [the ArcSight smart connector collection](https://community.microfocus.com/cyberres/arcsight/f/arcsight-product-announcements/163662/announcing-general-availability-of-arcsight-smart-connectors-7-10-0-8114-0). | | Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/). | LogRhythm | No| Instructions to set up LogRhythm to collect logs from an event hub are available [here](https://logrhythm.com/six-tips-for-securing-your-azure-cloud-environment/).
defender-for-cloud How To Manage Aws Assessments Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-aws-assessments-standards.md
+
+ Title: Manage AWS assessments and standards
+
+description: Learn how to create custom security assessments and standards for your AWS environment.
+ Last updated : 10/20/2022++
+# Manage AWS assessments and standards
+
+Security standards contain comprehensive sets of security recommendations to help secure your cloud environments. Security teams can use the readily available standards such as AWS CIS 1.2.0, AWS Foundational Security Best Practices, and AWS PCI DSS 3.2.1, or create custom standards and assessments to meet specific internal requirements.
+
+There are three types of resources that are needed to create and manage custom assessments:
+
+- Assessment:
+ - assessment details such as name, description, severity, remediation logic, etc.
+ - assessment logic in KQL
+ - the standard it belongs to
+- Standard: defines a set of assessments
+- Standard assignment: defines the scope that the standard will evaluate, for example, specific AWS accounts.
+
+You can either use the built-in regulatory compliance standards or create your own custom standards and assessments.
+
+## Assign a built-in compliance standard to your AWS account
+
+**To assign a built-in compliance standard to your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/aws-add-standard.png" alt-text="Screenshot that shows you where to navigate to in order to add an AWS standard." lightbox="media/how-to-manage-assessments-standards/aws-add-standard-zoom.png":::
+
+1. Select a built-in standard from the drop-down menu.
+
+1. Select **Save**.
+
+## Create a new custom standard for your AWS account
+
+**To create a new custom standard for your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+1. Select **New standard**.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/new-aws-standard.png" alt-text="Screenshot that shows you where to select a new AWS standard." lightbox="media/how-to-manage-assessments-standards/new-aws-standard.png":::
+
+1. Enter a name and description, and select which assessments you want to add.
+
+1. Select **Save**.
+
+## Assign a built-in assessment to your AWS account
+
+**To assign a built-in assessment to your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/aws-assessment.png" alt-text="Screenshot that shows where to navigate to, to select an AWS assessment." lightbox="media/how-to-manage-assessments-standards/aws-assessment.png":::
+
+1. Select **Existing assessment**.
+
+1. Select all relevant assessments from the drop-down menu.
+
+1. Select the standards from the drop-down menu.
+
+1. Select **Save**.
+
+## Create a new custom assessment for your AWS account
+
+**To create a new custom assessment for your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+1. Select **New assessment (preview)**.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/new-aws-assessment.png" alt-text="Screenshot of the new assessment screen for your AWS account." lightbox="media/how-to-manage-assessments-standards/new-aws-assessment.png":::
+
+1. Enter a name and severity, and select an assessment from the drop-down menu.
+
+1. Enter a KQL query that defines the assessment logic.
+
+    If you'd like to create a new query, select the [Azure Data Explorer](https://dataexplorer.azure.com/clusters/securitydatastoreus.centralus/databases/DiscoveryMockDataAws) link. The explorer contains mock data for all of the supported native APIs. The data appears in the same structure as defined in the API contract.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/azure-data-explorer.png" alt-text="Screenshot that shows where to select the Azure Data Explorer link." lightbox="media/how-to-manage-assessments-standards/azure-data-explorer.png":::
+
+ See the [how to build a query](#how-to-build-a-query) section for more examples.
+
+1. Select the standards to add to this assessment.
+
+1. Select **Save**.
+
+## How to build a query
+
+The last row of the query should return all the original columns (don't use 'project', 'project-away'). End the query with an iff statement that defines the healthy or unhealthy conditions: `| extend HealthStatus = iff([boolean-logic-here], 'UNHEALTHY','HEALTHY')`.
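+
+As a minimal sketch of that shape (the `EC2_Instance` table and the `Record.State.Name.Value` property come from the samples below; the health condition itself is illustrative):
+
+```kusto
+EC2_Instance
+// Extend only; keep all original columns in the output.
+| extend State = tolower(tostring(Record.State.Name.Value))
+// Finish with the HealthStatus verdict for each resource.
+| extend HealthStatus = iff(State != 'terminated', 'HEALTHY', 'UNHEALTHY')
+```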
+
+### Sample KQL queries
+
+When building a KQL query, you should use the following table structure:
+
+```kusto
+- TimeStamp
+ 2021-10-07T10:30:21.403732Z
+ - SdksInfo
+ {
+ "AWSSDK.EC2": "3.7.5.2"
+ }
+
+ - RecordProviderInfo
+ {
+ "CloudName": "AWS",
+ "CspmDiscoveryCloudRoleArn": "arn:aws:iam::123456789123:role/CSPMMonitoring",
+ "Type": "MultiCloudDiscoveryServiceDataCollector",
+ "HierarchyIdentifier": "123456789123",
+ "ConnectorId": "b3113210-63f9-43c5-a6a7-f14a2a5b3cd0"
+ }
+ - RecordOrganizationInfo
+ {
+ "Type": "MyOrganization",
+ "TenantId": "bda8bc53-d9f8-4248-b9a9-3a6c7fe0b92f",
+ "SubscriptionId": "69444886-de6b-40c5-8b43-065f739fffb9",
+ "ResourceGroupName": "MyResourceGroupName"
+ }
+
+ - CorrelationId
+ 4f5e50e1d92c400caf507036a1237c72
+ - RecordRegionalInfo
+ {
+ "Type": "MultiCloudRegion",
+ "RegionUniqueName": "eu-west-2",
+ "RegionDisplayName": "EU West (London)",
+ "IsGlobalForRecord": false
+ }
+
+ - RecordIdentifierInfo
+ {
+ "Type": "MultiCloudDiscoveryServiceDataCollector",
+ "RecordNativeCloudUniqueIdentifier": "arn:aws:ec2:eu-west-2:123456789123:elastic-ip/eipalloc-1234abcd5678efef9",
+ "RecordAzureUniqueIdentifier": "/subscriptions/69444886-de6b-40c5-8b43-065f739fffb9/resourcegroups/MyResourceGroupName/providers/Microsoft.Security/securityconnectors/b3113210-63f9-43c5-a6a7-f14a2a5b3cd0/securityentitydata/aws-ec2-elastic-ip-eipalloc-1234abcd5678efef9-eu-west-2",
+ "RecordIdentifier": "eipalloc-1234abcd5678efef9-eu-west-2",
+ "ResourceProvider": "EC2",
+ "ResourceType": "elastic-ip"
+ }
+ - Record
+ {
+ "AllocationId": "eipalloc-1234abcd5678efef9",
+ "AssociationId": "eipassoc-234abcd5678efef90",
+ "CarrierIp": null,
+ "CustomerOwnedIp": null,
+ "CustomerOwnedIpv4Pool": null,
+ "Domain": {
+ "Value": "vpc"
+ },
+ "InstanceId": "i-0a8fcc00493c4625d",
+ "NetworkBorderGroup": "eu-west-2",
+ "NetworkInterfaceId": "eni-34abcd5678efef901",
+ "NetworkInterfaceOwnerId": "123456789123",
+ "PrivateIpAddress": "172.31.21.88",
+ "PublicIp": "19.218.211.431",
+ "PublicIpv4Pool": "amazon",
+ "Tags": [
+ {
+ "Value": "arn:aws:cloudformation:eu-west-2:123456789123:stack/awseb-e-sjuh4tkr7a-stack/4ff15da0-2512-11ec-ab59-023b28e97f64",
+ "Key": "aws:cloudformation:stack-id"
+ },
+ {
+ "Value": "e-sjuh4tkr7a",
+ "Key": "elasticbeanstalk:environment-id"
+ },
+ {
+ "Value": "AWSEBEIP",
+ "Key": "aws:cloudformation:logical-id"
+ },
+ {
+ "Value": "awseb-e-sjuh4tkr7a-stack",
+ "Key": "aws:cloudformation:stack-name"
+ },
+ {
+ "Value": "Mebrennetest3-env",
+ "Key": "elasticbeanstalk:environment-name"
+ },
+ {
+ "Value": "Mebrennetest3-env",
+ "Key": "Name"
+ }
+ ]
+ }
+```
+
+> [!NOTE]
+> The `Record` field contains the data structure as it is returned from the AWS API. Use this field to define conditions that determine whether the resource is healthy or unhealthy.
+>
+> You can access internal properties of the `Record` field using dot notation. For example: `| extend EncryptionType = Record.Encryption.Type`.
+
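+For instance, using the elastic IP record shown above (the `EC2_ElasticIP` table name and the health condition are assumed for illustration):
+
+```kusto
+EC2_ElasticIP
+// Dot notation reaches into the nested Record payload.
+| extend DomainValue = tolower(tostring(Record.Domain.Value))
+| extend HealthStatus = iff(DomainValue == 'vpc', 'HEALTHY', 'UNHEALTHY')
+```
+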
+**Stopped EC2 instances should be removed after a specified time period**
+
+```kusto
+EC2_Instance
+| extend State = tolower(tostring(Record.State.Name.Value))
+| extend StoppedTime = todatetime(tostring(Record.StateTransitionReason))
+| extend HealthStatus = iff(not(State == 'stopped' and StoppedTime < ago(30d)), 'HEALTHY', 'UNHEALTHY')
+```
+
+**EC2 subnets should not automatically assign public IP addresses**
+
+
+```kusto
+EC2_Subnet
+| extend MapPublicIpOnLaunch = tolower(tostring(Record.MapPublicIpOnLaunch))
+| extend HealthStatus = iff(MapPublicIpOnLaunch == 'false' ,'HEALTHY', 'UNHEALTHY')
+```
+
+**EC2 instances should not use multiple ENIs**
+
+```kusto
+EC2_Instance
+| extend NetworkInterfaces = parse_json(Record)['NetworkInterfaces']
+| extend NetworkInterfaceCount = array_length(parse_json(NetworkInterfaces))
+| extend HealthStatus = iff(NetworkInterfaceCount == 1 ,'HEALTHY', 'UNHEALTHY')
+```
+
+You can use the following links to learn more about Kusto queries:
+- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
+- [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/)
+- [Must Learn KQL](https://azurecloudai.blog/2021/11/17/must-learn-kql-part-1-tools-and-resources/)
+
+## Next steps
+
+In this article, you learned how to manage your assessments and standards in Defender for Cloud.
+
+> [!div class="nextstepaction"]
+> [Find recommendations that can improve your security posture](review-security-recommendations.md)
defender-for-cloud How To Manage Gcp Assessments Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-gcp-assessments-standards.md
+
+ Title: Manage GCP assessments and standards
+
+description: Learn how to create custom security assessments and standards for your GCP environment.
+ Last updated : 10/18/2022++
+# Manage GCP assessments and standards
+
+Security standards contain comprehensive sets of security recommendations to help secure your cloud environments. Security teams can use the readily available regulatory standards such as GCP CIS 1.1.0 and GCP CIS 1.2.0, or create custom standards and assessments to meet specific internal requirements.
+
+There are three types of resources that are needed to create and manage custom assessments:
+
+- Assessment:
+ - assessment details such as name, description, severity, remediation logic, etc.
+ - assessment logic in KQL
+ - the standard it belongs to
+- Standard: defines a set of assessments
+- Standard assignment: defines the scope that the standard will evaluate, for example, specific GCP projects.
+
+You can either use the built-in compliance standards or create your own custom standards and assessments.
+
+## Assign a built-in compliance standard to your GCP project
+
+**To assign a built-in compliance standard to your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/gcp-standard.png" alt-text="Screenshot that shows you where to navigate to, to add a GCP standard." lightbox="media/how-to-manage-assessments-standards/gcp-standard-zoom.png":::
+
+1. Select a built-in standard from the drop-down menu.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/drop-down-menu.png" alt-text="Screenshot that shows you the standard options you can choose from the drop-down menu." lightbox="media/how-to-manage-assessments-standards/drop-down-menu.png":::
+
+1. Select **Save**.
+
+## Create a new custom standard for your GCP project
+
+**To create a new custom standard for your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+1. Select **New standard**.
+
+1. Enter a name and description, and select which assessments you want to add.
+
+1. Select **Save**.
+
+## Assign a built-in assessment to your GCP project
+
+**To assign a built-in assessment to your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/gcp-assessment.png" alt-text="Screenshot that shows where to navigate to, to select GCP assessment." lightbox="media/how-to-manage-assessments-standards/gcp-assessment.png":::
+
+1. Select **Existing assessment**.
+
+1. Select all relevant assessments from the drop-down menu.
+
+1. Select the standards from the drop-down menu.
+
+1. Select **Save**.
+
+## Create a new custom assessment for your GCP project
+
+**To create a new custom assessment for your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+1. Select **New assessment (preview)**.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/new-assessment.png" alt-text="Screenshot of the new assessment screen for a GCP project." lightbox="media/how-to-manage-assessments-standards/new-assessment.png":::
+
+1. In the general section, enter a name and severity.
+
+1. In the query section, select an assessment template from the drop-down menu, or use the following query schema:
+
+ For example:
+
+ **Ensure that Cloud Storage buckets have uniform bucket-level access enabled**
+
+    ```kusto
+    let UnhealthyBuckets = Storage_Bucket
+    | extend RetentionPolicy = Record.retentionPolicy
+    | where isnull(RetentionPolicy) or isnull(RetentionPolicy.isLocked) or tobool(RetentionPolicy.isLocked)==false
+    | project BucketName = RecordIdentifierInfo.CloudNativeResourceName; Logging_LogSink
+    | extend Destination = split(Record.destination,'/')[0]
+    | where Destination == 'storage.googleapis.com'
+    | extend LogBucketName = split(Record.destination,'/')[1]
+    | extend HealthStatus = iff(LogBucketName in(UnhealthyBuckets), 'UNHEALTHY', 'HEALTHY')
+    ```
+
+ See the [how to build a query](#how-to-build-a-query) section for more examples.
+
+1. Select **Save**.
+
+## How to build a query
+
+The last row of the query should return all the original columns (don't use 'project', 'project-away'). End the query with an iff statement that defines the healthy or unhealthy conditions: `| extend HealthStatus = iff([boolean-logic-here], 'UNHEALTHY','HEALTHY')`.
+
+### Sample KQL queries
+
+**Ensure that Cloud Storage buckets have uniform bucket-level access enabled**
+
+```kusto
+let UnhealthyBuckets = Storage_Bucket
+| extend RetentionPolicy = Record.retentionPolicy
+| where isnull(RetentionPolicy) or isnull(RetentionPolicy.isLocked) or tobool(RetentionPolicy.isLocked)==false
+| project BucketName = RecordIdentifierInfo.CloudNativeResourceName; Logging_LogSink
+| extend Destination = split(Record.destination,'/')[0]
+| where Destination == 'storage.googleapis.com'
+| extend LogBucketName = split(Record.destination,'/')[1]
+| extend HealthStatus = iff(LogBucketName in(UnhealthyBuckets), 'UNHEALTHY', 'HEALTHY')
+```
+
+**Ensure VM disks for critical VMs are encrypted**
+
+```kusto
+Compute_Disk
+| extend DiskEncryptionKey = Record.diskEncryptionKey
+| extend IsVmNotEncrypted = isempty(tostring(DiskEncryptionKey.sha256))
+| extend HealthStatus = iff(IsVmNotEncrypted, 'UNHEALTHY', 'HEALTHY')
+```
+
+**Ensure Compute instances are launched with Shielded VM enabled**
+
+```kusto
+Compute_Instance
+| extend InstanceName = tostring(Record.id)
+| extend ShieldedVmExist = tostring(Record.shieldedInstanceConfig.enableIntegrityMonitoring) =~ 'true' and tostring(Record.shieldedInstanceConfig.enableVtpm) =~ 'true'
+| extend HealthStatus = iff(ShieldedVmExist, 'HEALTHY', 'UNHEALTHY')
+```
+
+You can use the following links to learn more about Kusto queries:
+- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
+- [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/)
+- [Must Learn KQL](https://azurecloudai.blog/2021/11/17/must-learn-kql-part-1-tools-and-resources/)
+
+## Next steps
+
+In this article, you learned how to manage your assessments and standards in Defender for Cloud.
+
+> [!div class="nextstepaction"]
+> [Find recommendations that can improve your security posture](review-security-recommendations.md)
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
# Discover misconfigurations in Infrastructure as Code (IaC)
-Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps extension, extra support is located in the YAML configuration that can be used to run a specific tool, or several of the tools. For example, setting up the action or extension to run Infrastructure as Code (IaC) scanning only. This can help reduce pipeline run time.
+Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps extension, you can configure the YAML configuration file to run a single tool or multiple tools. For example, you can set up the action or extension to run Infrastructure as Code (IaC) scanning tools only. This can help reduce pipeline run time.
## Prerequisites -- [Configure Microsoft Security DevOps GitHub action](github-action.md).-- [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- Configure Microsoft Security DevOps for GitHub and/or Azure DevOps based on your source code management system:
+ - [Microsoft Security DevOps GitHub action](github-action.md)
+  - [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md)
+- Ensure you have an IaC template in your repository.
-## View the results of the IaC scan in GitHub
+## Configure IaC scanning and view the results in GitHub
1. Sign in to [GitHub](https://www.github.com).
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
  :::image type="content" source="media/tutorial-iac-vulnerabilities/commit-change.png" alt-text="Screenshot that shows where to select commit change on the GitHub page.":::
-1. (Optional) Skip this step if you already have an IaC template in your repository.
+1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
- Follow this link to [Install an IaC template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux).
+ For example, [commit an IaC template to deploy a basic Linux web application](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux) to your repository.
1. Select `azuredeploy.json`.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the deploy.json file is located.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the azuredeploy.json file is located.":::
1. Select **Raw** 1. Copy all the information in the file.
- ```Bash
+ ```json
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
:::image type="content" source="media/tutorial-iac-vulnerabilities/file-added.png" alt-text="Screenshot that shows that the new file you created has been added to your repository.":::
-1. Select **Actions**.
-1. Select the workflow to see the results.
+1. Confirm the Microsoft Security DevOps scan completed:
+ 1. Select **Actions**.
+ 2. Select the workflow to see the results.
-1. Navigate in the results to the scan results section.
+1. Navigate to **Security** > **Code scanning alerts** to view the results of the scan (filter by tool as needed to see just the IaC findings).
-1. Navigate to **Security** > **Code scanning alerts** to view the results of the scan.
-
-## View the results of the IaC scan in Azure DevOps
+## Configure IaC scanning and view the results in Azure DevOps
**To view the results of the IaC scan in Azure DevOps**
-1. Sign in to [Azure DevOps](https://dev.azure.com/)
+1. Sign in to [Azure DevOps](https://dev.azure.com/).
+
+1. Select the desired project.
-1. Navigate to **Pipeline**.
+1. Select **Pipeline**.
-1. Locate the pipeline with MSDO Azure DevOps Extension is configured.
+1. Select the pipeline where the Microsoft Security DevOps Azure DevOps Extension is configured.
-1. Select **Edit**.
+1. **Edit** the pipeline configuration YAML file, adding the following lines:
1. Add the following lines to the YAML file
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
1. Select **Save**.
-1. Select **Save** to commit directly to the main branch or Create a new branch for this commit
+1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
+
+1. Select **Save** to commit directly to the main branch or Create a new branch for this commit.
1. Select **Pipeline** > **`Your created pipeline`** to view the results of the IaC scan. 1. Select any result to see the details.
-## Remediate PowerShell based rules:
+## View details and remediation information on IaC rules included with Microsoft Security DevOps
+
+### PowerShell-based rules
The following information covers the PowerShell-based rules included by our integration with [PSRule for Azure](https://aka.ms/ps-rule-azure/rules). The tool will only evaluate the rules under the [Security pillar](https://azure.github.io/PSRule.Rules.Azure/en/rules/module/#security) unless the option `--include-non-security-rules` is used. > [!NOTE]
-> Severity levels are scaled from 1 to 3. Where 1 = High, 2 = Medium, 3 = Low.
+> PowerShell-based rules are included by our integration with [PSRule for Azure](https://aka.ms/ps-rule-azure/rules). The tool will evaluate all rules under the [Security pillar](https://azure.github.io/PSRule.Rules.Azure/en/rules/module/#security).
### JSON-Based Rules:
+JSON-based rules for ARM templates and Bicep files are provided by [Template-Analyzer](https://github.com/Azure/template-analyzer#template-best-practice-analyzer-bpa). The following sections give details on Template-Analyzer's rules and remediation guidance.
+
+> [!NOTE]
+> Severity levels are scaled from 1 to 3. Where 1 = High, 2 = Medium, 3 = Low.
+ #### TA-000001: Diagnostic logs in App Services should be enabled Audits the enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised.
Audits the enabling of diagnostic logs on the app. This enables you to recreate
#### TA-000002: Remote debugging should be turned off for API Apps
-Remote debugging requires inbound ports to be opened on an API app. These ports become easy targets for compromise from various internet based attacks. If you no longer need to use remote debugging, it should be turned off.
+Remote debugging requires inbound ports to be opened on an API app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
Remote debugging requires inbound ports to be opened on an API app. These ports
Enable FTPS enforcement for enhanced security.
-**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
+**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps) in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
**Severity level**: 1
Enable FTPS enforcement for enhanced security.
API apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
-**Recommendation**: To [use HTTPS to ensure, server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
+**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https) in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
**Severity level**: 2
API apps should require HTTPS to ensure connections are made to the expected ser
API apps should require the latest TLS version.
-**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
+**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions) in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
**Severity level**: 1
For enhanced authentication security, use a managed identity. On Azure, managed
#### TA-000008: Remote debugging should be turned off for Function Apps
-Remote debugging requires inbound ports to be opened on a function app. These ports become easy targets for compromise from various internet based attacks. If you no longer need to use remote debugging, it should be turned off.
+Remote debugging requires inbound ports to be opened on a function app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
For enhanced authentication security, use a managed identity. On Azure, managed
#### TA-000014: Remote debugging should be turned off for Web Applications
-Remote debugging requires inbound ports to be opened on a web application. These ports become easy targets for compromise from various internet based attacks. If you no longer need to use remote debugging, it should be turned off.
+Remote debugging requires inbound ports to be opened on a web application. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
Set the data retention for your SQL Server's auditing to storage account destina
#### TA-000029: Azure API Management APIs should use encrypted protocols only
-Set the protocols property to only include HTTPs.
+Set the protocols property to only include HTTPS.
**Recommendation**: To use encrypted protocols only, add (or update) the *protocols* property in the [Microsoft.ApiManagement/service/apis resource properties](/azure/templates/microsoft.apimanagement/service/apis?tabs=json), to only include HTTPS. Allowing any additional protocols (for example, HTTP, WS) is insecure.
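As an illustrative sketch (the service name, API name, display name, and path are placeholder assumptions), the API resource would restrict *protocols* to HTTPS only:

```json
{
  "type": "Microsoft.ApiManagement/service/apis",
  "apiVersion": "2021-08-01",
  "name": "contoso-apim/echo-api", // hypothetical service/API pair
  "properties": {
    "displayName": "Echo API",
    "path": "echo",
    "protocols": [ "https" ] // omit "http" and "ws" to stay encrypted-only
  }
}
```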
Set the protocols property to only include HTTPs.
- Learn more about the [Template Best Practice Analyzer](https://github.com/Azure/template-analyzer).
-In this tutorial you learned how to configure the Microsoft Security DevOps GitHub Action and Azure DevOps Extension to scan for only Infrastructure as Code misconfigurations.
+In this tutorial, you learned how to configure the Microsoft Security DevOps GitHub Action and Azure DevOps Extension to scan for Infrastructure as Code (IaC) security misconfigurations and how to view the results.
## Next steps
Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
-Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
+Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
description: This article explains how to respond to recommendations in Microsof
Previously updated : 11/09/2021 Last updated : 10/20/2022 # Implement security recommendations in Microsoft Defender for Cloud
To simplify remediation and improve your environment's security (and increase yo
**Fix** helps you quickly remediate a recommendation on multiple resources.
-> [!TIP]
-> The **Fix** feature is only available for specific recommendations. To find recommendations that have an available fix, use the **Response actions** filter for the list of recommendations:
->
-> :::image type="content" source="media/implement-security-recommendations/quick-fix-filter.png" alt-text="Use the filters above the recommendations list to find recommendations that have the Fix option.":::
- To implement a **Fix**: 1. From the list of recommendations that have the **Fix** action icon :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation. :::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="Recommendations list highlighting recommendations with Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
-1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Remediate**.
+1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Fix**.
> [!NOTE] > Some of the listed resources might be disabled, because you don't have the appropriate permissions to modify them.
To implement a **Fix**:
![Quick fix.](./media/implement-security-recommendations/microsoft-defender-for-cloud-quick-fix-view.png) > [!NOTE]
- > The implications are listed in the grey box in the **Remediate resources** window that opens after clicking **Remediate**. They list what changes happen when proceeding with the **Fix**.
+ > The implications are listed in the grey box in the **Fixing resources** window that opens after clicking **Fix**. They list what changes happen when proceeding with the **Fix**.
1. Insert the relevant parameters if necessary, and approve the remediation.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Defender for DevOps allows you to gain visibility into and manage your connected
Security teams can configure pull request annotations to help developers address secret scanning findings in Azure DevOps directly on their pull requests.
-You can configure the Microsoft Security DevOps tools on Azure DevOps pipelines and GitHub workflows to enable the following security scans:
+You can configure the Microsoft Security DevOps tools on Azure Pipelines and GitHub workflows to enable the following security scans:
| Name | Language | License |
|--|--|--|
| [Bandit](https://github.com/PyCQA/bandit) | python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/main/LICENSE) |
| [BinSkim](https://github.com/Microsoft/binskim) | Binary – Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
| [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
-| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (aka CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files common types: default passwords, SQL connection strings, Certificates with private keys| Not Open Source |
+| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks, such as those in source code and configuration files. Common types include default passwords, SQL connection strings, and certificates with private keys. | Not Open Source |
| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
| [Terrascan](https://github.com/tenable/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
| [Trivy](https://github.com/aquasecurity/trivy) | Container images, file systems, git repositories | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
The new release contains the following capabilities:
> When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy.

 |Recommendation| Assessment key|
- |-|-|
- |MFA should be enabled on accounts with owner permissions on your subscription|94290b00-4d0c-d7b4-7cea-064a9554e681|
- |MFA should be enabled on accounts with read permissions on your subscription|151e82c5-5341-a74b-1eb0-bc38d2c84bb5|
- |MFA should be enabled on accounts with write permissions on your subscription|57e98606-6b1e-6193-0e3d-fe621387c16b|
- |External accounts with owner permissions should be removed from your subscription|c3b6ae71-f1f0-31b4-e6c1-d5951285d03d|
- |External accounts with read permissions should be removed from your subscription|a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b|
- |External accounts with write permissions should be removed from your subscription|04e7147b-0deb-9796-2e5c-0336343ceb3d|
+ |--|--|
+ |Accounts with owner permissions on Azure resources should be MFA enabled|6240402e-f77c-46fa-9060-a7ce53997754|
+ |Accounts with write permissions on Azure resources should be MFA enabled|c0cb17b2-0607-48a7-b0e0-903ed22de39b|
+ |Accounts with read permissions on Azure resources should be MFA enabled|dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c|
+ |Guest accounts with owner permissions on Azure resources should be removed|20606e75-05c4-48c0-9d97-add6daa2109a|
+ |Guest accounts with write permissions on Azure resources should be removed|0354476c-a12a-4fcc-a79d-f0ab7ffffdbb|
+ |Guest accounts with read permissions on Azure resources should be removed|fde1c0c9-0fd2-4ecc-87b5-98956cbc1095|
+ |Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf|
+ |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
The recommendations, although in preview, will appear next to the recommendations that are currently in GA.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 09/20/2022 Last updated : 10/20/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change |
|--|--|
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | October 2022 |
-
-### Multiple changes to identity recommendations
-
-**Estimated date for change:** October 2022
-
-Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In October, we'll be making the changes outlined below.
-
-#### New recommendations in preview
-
-The new release will bring the following capabilities:
-- **Extended evaluation scope** – Improved coverage to identity accounts without MFA and external accounts on Azure resources (instead of subscriptions only) allowing security admins to view role assignments per account.
-- **Improved freshness interval** - Currently, the identity recommendations have a freshness interval of 24 hours. This update will reduce that interval to 12 hours.
-- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).
- This update will allow you to exempt specific accounts from evaluation with the six recommendations listed in the following table.
-
- Typically, you'd exempt emergency "break glass" accounts from MFA recommendations, because such accounts are often deliberately excluded from an organization's MFA requirements. Alternatively, you might have external accounts that you'd like to permit access to but which don't have MFA enabled.
-
- > [!TIP]
- > When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy.
-
- |Recommendation| Assessment key|
- |--|--|
- |Accounts with owner permissions on Azure resources should be MFA enabled|6240402e-f77c-46fa-9060-a7ce53997754|
- |Accounts with write permissions on Azure resources should be MFA enabled|c0cb17b2-0607-48a7-b0e0-903ed22de39b|
- |Accounts with read permissions on Azure resources should be MFA enabled|dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c|
- |Guest accounts with owner permissions on Azure resources should be removed|20606e75-05c4-48c0-9d97-add6daa2109a|
- |Guest accounts with write permissions on Azure resources should be removed|0354476c-a12a-4fcc-a79d-f0ab7ffffdbb|
- |Guest accounts with read permissions on Azure resources should be removed|fde1c0c9-0fd2-4ecc-87b5-98956cbc1095|
- |Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf|
- |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
+| None | None |
## Next steps
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Enter the following parameters:
| Date and time | Date and time that the syslog server machine received the information. |
| Priority | User. Alert |
| Hostname | Sensor IP address |
-| Protocol | TCP or UDP |
-| Message | Sensor: The sensor name.<br /> Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. |
+| Message | CyberX platform name: The sensor name.<br /> Microsoft Defender for IoT Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Protocol (Optional): The detected source protocol.<br /> Address (Optional): Source protocol address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Protocol (Optional): The detected destination protocol.<br /> Address (Optional): The destination protocol address.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert.<br /> UUID (Optional): The UUID of the alert. |
| Syslog object output | Description |
|--|--|
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Delete all sensors that are associated with the subscription prior to removing t
## Move existing sensors to a different subscription
-Business considerations may require that you apply your existing IoT sensors to a different subscription than the one youΓÇÖre currently using. To do this, you'll need to onboard a new plan and register the sensors under the new subscription, and then remove them from the old subscription. This process may include some downtime, and historic data isn't migrated.
+Business considerations may require that you apply your existing IoT sensors to a different subscription than the one you're currently using. To do this, you'll need to onboard a new plan to the new subscription, register the sensors under the new subscription, and then remove them from the previous subscription.
+
+Billing changes will take effect one hour after cancellation of the previous subscription, and will be reflected on the next month's bill. Devices will be synchronized from the sensor to the new subscription automatically. Manual edits made in the portal will not be migrated. New alerts created by the sensor will be created under the new subscription, and existing alerts in the old subscription can be closed in bulk.
**To switch to a new subscription**:
-1. Onboard a new plan to the new subscription you want to use. For more information, see:
+**For OT sensors**:
+
+1. In the Azure portal, [onboard a new plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) to the new subscription you want to use.
+
+1. Create a new activation file by [following the steps to onboard an OT sensor](onboard-sensors.md#onboard-ot-sensors).
+ - Replicate site and sensor hierarchy as is.
+ - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone will be merged into one device.
+
+1. [Upload a new activation file](how-to-manage-individual-sensors.md#upload-new-activation-files) for your sensors under the new subscription.
+
+1. Delete the sensor identities from the previous subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
- [Onboard a plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) in the Azure portal
+1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the previous subscription.
- [Onboard a plan for Enterprise IoT networks](#onboard-a-defender-for-iot-plan-for-enterprise-iot-networks) in Defender for Endpoint
+**For Enterprise IoT sensors**:
-1. Onboard your sensors again under the new subscription. For OT sensors, [upload a new activation](how-to-manage-individual-sensors.md#upload-new-activation-files) file for your sensors.
+1. In Defender for Endpoint, [onboard a new plan for Enterprise IoT networks](#onboard-a-defender-for-iot-plan-for-enterprise-iot-networks) to the new subscription you want to use.
-1. Delete the sensor identities from the legacy subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+1. In the Azure portal, [follow the steps to register an Enterprise IoT sensor](tutorial-getting-started-eiot-sensor.md#register-an-enterprise-iot-sensor) under the new subscription.
-1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the legacy subscription.
+1. Log in to your sensor and run the activation command you saved when registering the sensor under the new subscription.
+
+1. Delete the sensor identities from the previous subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+
+1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the previous subscription.
+
+> [!NOTE]
+> If the previous subscription was connected to Microsoft Sentinel, you will need to connect the new subscription to Microsoft Sentinel and remove the old subscription. For more information, see [Connect Microsoft Defender for IoT with Microsoft Sentinel](/azure/sentinel/iot-solution).
## Next steps
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Use the following tables to ensure that required firewalls are open on your work
| Protocol | Transport | In/Out | Port | Purpose | Source | Destination |
|--|--|--|--|--|--|--|
-| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`|
-| HTTPS | TCP | Out | 443 | Remote sensor updates from the Azure portal | Sensor| `download.microsoft.com`|
+| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.<br><br>**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`<br> `download.microsoft.com`|
+ ### Sensor access to the on-premises management console
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-active-directory.md
Following are the prerequisites to authenticate to Event Grid.
- Install the SDK on your application.
  - [Java](/java/api/overview/azure/messaging-eventgrid-readme#include-the-package)
- - [.NET](/dotnet/api/overview/azure/messaging.eventgrid-readme-pre#install-the-package)
+ - [.NET](/dotnet/api/overview/azure/messaging.eventgrid-readme#install-the-package)
  - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid#install-the-azureeventgrid-package)
  - [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid#install-the-package)
- Install the Azure Identity client library. The Event Grid SDK depends on the Azure Identity client library for authentication.
event-hubs Compare Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/compare-tiers.md
Title: Compare Azure Event Hubs tiers description: This article compares supported tiers of Azure Event Hubs. Previously updated : 07/20/2021 Last updated : 10/19/2022 # Compare Azure Event Hubs tiers
frontdoor How To Configure Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-rule-set.md
This article shows how to create a Rule Set and your first set of rules using th
> [!NOTE] > * To delete a condition or action from a rule, use the trash can on the right-hand side of the specific condition or action. > * To create a rule that applies to all incoming traffic, do not specify any conditions.
- > * To stop evaluating remaining rules if a specific rule is met, check **Stop evaluating remaining rule**. If this option is checked and all remaining rules in the Rule Set will not be executed regardless if the matching conditions were met.
+ > * To stop evaluating remaining rules if a specific rule is met, check **Stop evaluating remaining rule**. If this option is checked, all remaining rules in the Rule Set won't be executed, regardless of whether the matching conditions were met.
> * All paths in Rules Engine are case sensitive. > * Header names should adhere to [RFC 7230](https://datatracker.ietf.org/doc/html/rfc7230#section-3.2.6).
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 09/23/2022 Last updated : 10/20/2022
This effect is useful for testing situations or for when the policy definition h
effect. This flexibility makes it possible to disable a single assignment instead of disabling all of that policy's assignments.
-An alternative to the Disabled effect is **enforcementMode**, which is set on the policy assignment.
-When **enforcementMode** is _Disabled_, resources are still evaluated. Logging, such as Activity
+> [!NOTE]
+> Policy definitions that use the **Disabled** effect have the default compliance state **Compliant** after assignment.
+
+An alternative to the **Disabled** effect is **enforcementMode**, which is set on the policy assignment.
+When **enforcementMode** is **Disabled**, resources are still evaluated. Logging, such as Activity
logs, and the policy effect don't occur. For more information, see [policy assignment - enforcement mode](./assignment-structure.md#enforcement-mode).
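As a minimal sketch, a policy assignment with enforcement disabled might look like the following; note that in the ARM representation the value is `DoNotEnforce` (shown as **Disabled** in the portal), and the definition ID and names below are placeholders:

```json
{
  "type": "Microsoft.Authorization/policyAssignments",
  "apiVersion": "2021-06-01",
  "name": "example-assignment", // hypothetical assignment name
  "properties": {
    "displayName": "Example assignment, evaluation only",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>", // placeholder
    "enforcementMode": "DoNotEnforce" // evaluate and report compliance, but don't apply the effect or log it
  }
}
```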
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Hive metastore operation takes much time and thus slow down Hive compilation. In
## Troubleshooting guide
-[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
+[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](/azure/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
## References
https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/
## Further reading
-* [HDInsight 4.0 Announcement](./hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0.md)
+* [HDInsight 4.0 Announcement](/azure/hdinsight/hdinsight-version-release)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
keywords: hadoop getting started,hadoop linux,hadoop quickstart,hive getting sta
Previously updated : 09/15/2022 Last updated : 10/20/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Azure portal and run a Hive job
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
1. From the **Review + create** tab, verify the values you selected in the earlier steps.
- :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-review-create-hadoop.png" alt-text="HDInsight Linux get started cluster summary" border="true":::
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-review-create-hadoop.png" alt-text="Screenshot showing HDInsight Linux get started cluster summary." border="true":::
1. Select **Create**. It takes about 20 minutes to create a cluster. Once the cluster is created, you see the cluster overview page in the Azure portal.
- :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/cluster-settings-overview.png" alt-text="HDInsight Linux get started cluster settings" border="true":::
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/cluster-settings-overview.png" alt-text="Screenshot showing HDInsight Linux get started cluster settings." border="true":::
## Run Apache Hive queries
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
1. To open Ambari, from the previous screenshot, select **Cluster Dashboard**. You can also browse to `https://ClusterName.azurehdinsight.net` where `ClusterName` is the cluster you created in the previous section.
- :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-get-started-open-cluster-dashboard.png" alt-text="HDInsight Linux get started cluster dashboard" border="true":::
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-get-started-open-cluster-dashboard.png" alt-text="Screenshot showing HDInsight Linux get started cluster dashboard." border="true":::
2. Enter the Hadoop username and password that you specified while creating the cluster. The default username is **admin**.
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
HDInsight has two options to configure the databases in the clusters.
During cluster creation, the default configuration uses an internal database. Once the cluster is created, customers can't change the database type. Hence, it's recommended to create and use an external database. You can create custom databases for Ambari, Hive, and Ranger.
-For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](./hdinsight-custom-ambari-db.md)
+For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](/azure/hdinsight/hdinsight-custom-ambari-db)
## Keep your clusters up to date
As part of the best practices, we recommend you keep your clusters updated on re
HDInsight releases happen every 30 to 60 days. It's always good to move to the latest release as early as possible. The recommended maximum duration for cluster upgrades is less than six months.
-For more information, see how to [Migrate HDInsight cluster to a newer version](./hdinsight-upgrade-cluster.md)
+For more information, see how to [Migrate HDInsight cluster to a newer version](/azure/hdinsight/hdinsight-upgrade-cluster)
## Next steps

* [Create Apache Hadoop cluster in HDInsight](./hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md)
* [Create Apache Spark cluster - Portal](./spark/apache-spark-jupyter-spark-sql-use-portal.md)
-* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
+* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
Previously updated : 07/18/2022 Last updated : 10/20/2022 # Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
Migration of Hive tables to a new Storage Account needs to be done as a separate
This step uses the [`Hive Schema Tool`](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool) from HDInsight 4.0 to upgrade the metastore schema. > [!WARNING]
-> This step is not reversible. Run this only on a copy of the metastore.
+> This step isn't reversible. Run this only on a copy of the metastore.
1. Create a temporary HDInsight 4.0 cluster to access the 4.0 Hive `schematool`. You can use the [default Hive metastore](../hdinsight-use-external-metadata-stores.md#default-metastore) for this step.
This step uses the [`Hive Schema Tool`](https://cwiki.apache.org/confluence/disp
> [!NOTE] > This utility uses client `beeline` to execute SQL scripts in `/usr/hdp/$STACK_VERSION/hive/scripts/metastore/upgrade/mssql/upgrade-*.mssql.sql`. >
- > SQL Syntax in these scripts is not necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
+ > SQL Syntax in these scripts isn't necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
> > If any script fails due to resource capacity or transaction timeouts, scale up the SQL Database.
Create a new HDInsight 4.0 cluster, [selecting the upgraded Hive metastore](../h
* The new cluster doesn't require having the same default filesystem.
-* If the metastore contains tables residing in multiple Storage Accounts, you need to add those Storage Accounts to the new cluster to access those tables. See [add additional Storage Accounts to HDInsight](../hdinsight-hadoop-add-storage.md).
+* If the metastore contains tables residing in multiple Storage Accounts, you need to add those Storage Accounts to the new cluster to access those tables. See [add extra Storage Accounts to HDInsight](../hdinsight-hadoop-add-storage.md).
* If Hive jobs fail due to storage inaccessibility, verify that the table location is in a Storage Account added to the cluster.
sudo su - hive
STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }')
/usr/hdp/$STACK_VERSION/hive/bin/hive --config /etc/hive/conf --service strictmanagedmigration --hiveconf hive.strict.managed.tables=true -m automatic --modifyManagedTables
```
+### 6. Class not found error with `MultiDelimitSerDe`
+
+**Problem**
+
+In certain situations when running a Hive query, you might receive a `java.lang.ClassNotFoundException` stating that the `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` class isn't found. This error occurs when a customer migrates from HDInsight 3.6 to HDInsight 4.0. The SerDe class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe`, which is part of `hive-contrib-1.2.1000.2.6.5.3033-1.jar` in HDInsight 3.6, is removed. HDInsight 4.0 uses the `org.apache.hadoop.hive.serde2.MultiDelimitSerDe` class instead, which is part of the `hive-exec` JAR. The `hive-exec` JAR loads to HS2 by default when the service starts.
+
+**Steps to troubleshoot**
+
+1. Check whether any JAR under the Hive libraries folder (`/usr/hdp/current/hive/lib` in HDInsight) contains this class.
+1. Check for the class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` and `org.apache.hadoop.hive.serde2.MultiDelimitSerDe` as mentioned in the solution.
+
+**Solution**
+
+1. Although a JAR file is a binary file, you can still use the `grep` command with the `-Hrni` switches, as shown below, to search for a particular class name:
+ ```
+ grep -Hrni "org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe" /usr/hdp/current/hive/lib
+ ```
+1. If the command can't find the class, it returns no output. If it finds the class in a JAR file, it returns the matching output.
+
+1. Below is an example taken from an HDInsight 4.x cluster:
+
+ ```
+ sshuser@hn0-alters:~$ grep -Hrni "org.apache.hadoop.hive.serde2.MultiDelimitSerDe" /usr/hdp/4.1.9.7/hive/lib/
+ Binary file /usr/hdp/4.1.9.7/hive/lib/hive-exec-3.1.0.4.1-SNAPSHOT.jar matches
+ ```
+1. From the above output, we can confirm that no JAR contains the class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` and that the `hive-exec` JAR contains `org.apache.hadoop.hive.serde2.MultiDelimitSerDe`.
+1. Try to create the table with the row format SerDe as `ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'`.
+1. This command will fix the issue. If you've already created the table, you can update it by using the below commands:
+ ```
+ Hive => ALTER TABLE TABLE_NAME SET SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
+ Backend DB => UPDATE SERDES SET SLIB='org.apache.hadoop.hive.serde2.MultiDelimitSerDe' where SLIB='org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe';
+ ```
+The update command updates the details manually in the backend DB, and the alter command alters the table with the new SerDe class from Beeline or Hive.
## Secure Hive across HDInsight versions

HDInsight optionally integrates with Azure Active Directory using HDInsight Enterprise Security Package (ESP). ESP uses Kerberos and Apache Ranger to manage the permissions of specific resources within the cluster. Ranger policies deployed against Hive in HDInsight 3.6 can be migrated to HDInsight 4.0 with the following steps:

1. Navigate to the Ranger Service Manager panel in your HDInsight 3.6 cluster.
-2. Navigate to the policy named **HIVE** and export the policy to a json file.
-3. Make sure that all users referred to in the exported policy json exist in the new cluster. If a user is referred to in the policy json but doesn't exist in the new cluster, either add the user to the new cluster or remove the reference from the policy.
-4. Navigate to the **Ranger Service Manager** panel in your HDInsight 4.0 cluster.
-5. Navigate to the policy named **HIVE** and import the ranger policy json from step 2.
+1. Navigate to the policy named **HIVE** and export the policy to a json file.
+1. Make sure that all users referred to in the exported policy json exist in the new cluster. If a user is referred to in the policy json but doesn't exist in the new cluster, either add the user to the new cluster or remove the reference from the policy.
+1. Navigate to the **Ranger Service Manager** panel in your HDInsight 4.0 cluster.
+1. Navigate to the policy named **HIVE** and import the ranger policy json from step 2.
## Hive changes in HDInsight 4.0 that may require application changes
-* See [Additional configuration using Hive Warehouse Connector](./apache-hive-warehouse-connector.md) for sharing the metastore between Spark and Hive for ACID tables.
+* See [Extra configuration using Hive Warehouse Connector](./apache-hive-warehouse-connector.md) for sharing the metastore between Spark and Hive for ACID tables.
* HDInsight 4.0 uses [Storage Based Authorization](https://cwiki.apache.org/confluence/display/Hive/Storage+Based+Authorization+in+the+Metastore+Server). If you modify file permissions or create folders as a different user than Hive, you'll likely hit Hive errors based on storage permissions. To fix, grant `rw-` access to the user. See [HDFS Permissions Guide](https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html).
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Currently, the allowed actions for a given role are applied *globally* on the AP
## Service limits
-* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You'll need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
+* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 100,000 RUs in the portal for Azure API for FHIR. You'll need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 100,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
* **Bundle size** - Each bundle is limited to 500 items.
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Title: Choosing a method of deployment for MedTech service in Azure - Azure Health Data Services
+ Title: Choosing a method of deployment for the MedTech service in Azure - Azure Health Data Services
description: In this article, you'll learn how to choose a method to deploy the MedTech service in Azure. Previously updated : 10/10/2022 Last updated : 10/20/2022
The different deployment methods are:
## Azure ARM Quickstart template with Deploy to Azure button
-Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a Fast Healthcare Interoperability Resources (FHIR&#174;) service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but does not allow for much customization.
+Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a Fast Healthcare Interoperability Resources (FHIR&#174;) service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but doesn't allow for much customization.
-For more information about the Quickstart template and the Deploy to Azure button, see [Deploy the MedTech service with a QuickStart template](deploy-02-new-button.md).
+For more information about the Quickstart template and the Deploy to Azure button, see [Deploy the MedTech service with a Quickstart template](deploy-02-new-button.md).
## Azure PowerShell and Azure CLI automation Azure provides Azure PowerShell and Azure CLI to speed up your configurations when used in enterprise environments. Deploying MedTech service with Azure PowerShell or Azure CLI can be useful for adding automation so that you can scale your deployment for a large number of developers. This method is more detailed but provides extra speed and efficiency because it allows you to automate your deployment.
-For more information about Using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](/azure/healthcare-apis/iot/deploy-08-new-ps-cli).
+For more information about Using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](deploy-08-new-ps-cli.md).
## Manual deployment
-The manual deployment method uses Azure portal to implement each deployment task individually. There are no shortcuts. Because you will be able to see all the details of how to complete the sequence of each task, this procedure can be beneficial if you need to customize or troubleshoot your deployment process. This is the most complex method, but it provides valuable technical information and developmental options that will enable you to fine-tune your deployment very precisely.
+The manual deployment method uses Azure portal to implement each deployment task individually. There are no shortcuts. Because you'll be able to see all the details of how to complete the sequence of each task, this procedure can be beneficial if you need to customize or troubleshoot your deployment process. This is the most complex method, but it provides valuable technical information and developmental options that will enable you to fine-tune your deployment precisely.
-For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](/azure/healthcare-apis/iot/deploy-03-new-manual).
+For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md).
## Deployment architecture overview
The following data-flow diagram outlines the basic steps of MedTech service depl
:::image type="content" source="media/iot-get-started/get-started-with-iot.png" alt-text="Diagram showing MedTech service architecture overview." lightbox="media/iot-get-started/get-started-with-iot.png":::
-There are six different steps of the MedTech service PaaS. Only the first four apply to deployment. All the methods of deployment will implement each of these four steps. However, the QuickStart template method will automatically implement part of step 1 and all of step 2. The other two methods will have to implement all of the steps individually. Here is a summary of each of the four deployment steps:
+There are six different steps of the MedTech service PaaS. Only the first four apply to deployment. All the methods of deployment will implement each of these four steps. However, the QuickStart template method will automatically implement part of step 1 and all of step 2. The other two methods will have to implement all of the steps individually. Here's a summary of each of the four deployment steps:
### Step 1: Prerequisites - Have an Azure subscription-- Create RBAC roles contributor and user access administrator or owner. This feature is automatically done in the QuickStart template method with the Deploy to Azure button, but it is not included in manual or PowerShell/CLI method and need to be implemented individually.
+- Create RBAC roles contributor and user access administrator or owner. This feature is automatically done in the QuickStart template method with the Deploy to Azure button, but it isn't included in the manual or PowerShell/CLI methods and needs to be implemented individually.
### Step 2: Provision
-The QuickStart template method with the Deploy to Azure button automatically provides all these steps, but they are not included in the manual or the PowerShell/CLI method and must be completed individually.
+The QuickStart template method with the Deploy to Azure button automatically provides all these steps, but they aren't included in the manual or the PowerShell/CLI method and must be completed individually.
- Create a resource group and workspace for Event Hubs, FHIR, and MedTech services. - Provision an Event Hubs instance to a namespace.
For information about granting access to the FHIR service, see [Granting access
In this article, you learned about the different types of deployment for MedTech service. To learn more about MedTech service, see
->[!div class="nextstepaction"]
->[What is MedTech service?](/rest/api/healthcareapis/iot-connectors).
+> [!div class="nextstepaction"]
+> [What is MedTech service?](iot-connector-overview.md).
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the MedTech service in Azure Health Data Services description: This document describes how to get you started with the MedTech service in Azure Health Data Services.-+ Previously updated : 08/30/2022- Last updated : 10/19/2022+
This article will show you how to get started with the Azure MedTech service in
The following diagram outlines the basic architectural path that enables the MedTech service to receive data from a medical device and send it to the FHIR service. This diagram shows how the six-step implementation process is divided into three key development stages: deployment, post-deployment, and data processing.
-[![Diagram showing MedTech service architectural overview.](media/iot-get-started/get-started-with-iot.png)](media/iot-get-started/get-started-with-iot.png#lightbox)
### Deployment
In order to begin deployment, you need to determine if you have: an Azure subscr
- If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- You must have the appropriate RBAC roles for the subscription resources you want to use. The roles required for a user to complete the provisioning would be Contributor AND User Access Administrator OR Owner. The Contributor role allows the user to provision resources, and the User Access Administrator role allows the user to grant access so resources can send data between them. The Owner role can perform both. For more information, see [Azure role-based access control](/azure/cloud-adoption-framework/ready/considerations/roles).
+- You must have the appropriate RBAC roles for the subscription resources you want to use. The roles required for a user to complete the provisioning would be Contributor AND User Access Administrator OR Owner. The Contributor role allows the user to provision resources, and the User Access Administrator role allows the user to grant access so resources can send data between them. The Owner role can perform both. For more information, see [Azure role-based access control (RBAC)](/azure/cloud-adoption-framework/ready/considerations/roles).
## Step 2: Provision services for deployment
-After obtaining the required prerequisites, the next phase of deployment is to create a workspace and provision instances of the Event Hubs service, FHIR service, and MedTech service. You must also give the Event Hubs permission to read data from your device and give the MedTech service permission to read and write to the FHIR service. There are four parts of this provisioning process.
+After you obtain the required prerequisites, the next phase of deployment is to create a workspace and provision instances of the Event Hubs service, FHIR service, and MedTech service. You must also give the Event Hubs permission to read data from your device and give the MedTech service permission to read and write to the FHIR service. There are four parts of this provisioning process.
### Create a resource group and workspace
The MedTech service persists the data to the FHIR store using the system-managed
## Step 3: Configure MedTech for deployment
-After you have fulfilled the prerequisites and provisioned your services, the next phase of deployment is to configure MedTech services to ingest data, set up device mappings, and set up destination mappings. These configuration settings will ensure that the data can be translated from your device to Observations in the FHIR service. There are four parts in this configuration process.
+After you've fulfilled the prerequisites and provisioned your services, the next phase of deployment is to configure MedTech services to ingest data, set up device mappings, and set up destination mappings. These configuration settings will ensure that the data can be translated from your device to Observations in the FHIR service. There are four parts in this configuration process.
### Configuring MedTech service to ingest data
-MedTech service must be configured to ingest data it will receive from an event hub. First you must begin the official deployment process at the Azure portal. For more information about deploying MedTech service using the Azure portal, see [Overview of how to manually deploy the MedTech service using the Azure portal
-](deploy-03-new-manual.md) and [Prerequisites for manually deploying the MedTech service using the Azure portal](deploy-03-new-manual.md).
+MedTech service must be configured to ingest data it will receive from an event hub. First you must begin the official deployment process at the Azure portal. For more information about deploying MedTech service using the Azure portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md) and [Prerequisites for manually deploying the MedTech service using the Azure portal](deploy-03-new-manual.md).
Once you have started using the portal and added MedTech service to your workspace, you must then configure MedTech service to ingest data from an event hub. For more information about configuring MedTech service to ingest data, see [Configure the MedTech service to ingest data](deploy-05-new-config.md).

### Configuring device mappings
-You must configure MedTech to map it to the device you want to receive data from. Each device has unique settings that MedTech service must use. For more information on how to use Device mappings, see [How to use Device mappings](./how-to-use-device-mappings.md).
+You must configure MedTech to map it to the device you want to receive data from. Each device has unique settings that MedTech service must use. For more information on how to use Device mappings, see [How to use Device mappings](how-to-use-device-mappings.md).
-- Azure Health Data Services provides an open source tool you can use called [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/main/tools/data-mapper) that will help you map your device's data structure to a form that MedTech can use. For more information on device content mapping, see [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/main/docs/Configuration.md#device-content-mapping).
+- Azure Health Data Services provides an open source tool you can use called [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/main/tools/data-mapper). The IoMT Connector Data Mapper will help you map your device's data structure to a form that MedTech can use. For more information on device content mapping, see [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/main/docs/Configuration.md#device-content-mapping).
-- When you are deploying MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the Device mapping properties](deploy-05-new-config.md).
+- When you're deploying MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the Device mapping properties](deploy-05-new-config.md).
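Building on the device-mapping discussion in the bullets above, here's a minimal device mapping sketch using the `JsonPathContent` template type; the payload fields (`deviceid`, `measurementdatetime`, `heartrate`) and the type name are illustrative assumptions, not required names:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartrate)]",
        "deviceIdExpression": "$.deviceid",
        "timestampExpression": "$.measurementdatetime",
        "values": [
          {
            "required": "true",
            "valueExpression": "$.heartrate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
```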
### Configuring destination mappings

Once your device's data is properly mapped to your device's data format, you must then map it to an Observation in the FHIR service. For an overview of FHIR destination mappings, see [How to use the FHIR destination mappings](how-to-use-fhir-mappings.md). For step-by-step destination property mapping, see [Configure destination properties](deploy-05-new-config.md).
-).
### Create and deploy the MedTech service
-If you have completed the prerequisites, provisioning, and configuration, you are now ready to deploy the MedTech service. Create and deploy your MedTech service by following the procedures at [Create your MedTech service](deploy-06-new-deploy.md).
+If you've completed the prerequisites, provisioning, and configuration, you're now ready to deploy the MedTech service. Create and deploy your MedTech service by following the procedures at [Create your MedTech service](deploy-06-new-deploy.md).
## Step 4: Connect to required services (post deployment)
For more information about application roles, see [Authentication and Authorizat
## Step 5: Send the data for processing
-When MedTech service is deployed and connected to the Event Hubs and FHIR services, it is ready to process data from a device and translate it into a FHIR service Observation. There are three parts of the sending process.
+When MedTech service is deployed and connected to the Event Hubs and FHIR services, it's ready to process data from a device and translate it into a FHIR service Observation. There are three parts of the sending process.
### Data sent from Device to Event Hubs
-The data is sent to an Event Hub instance so that it can wait until MedTech service is ready to receive it. The data transfer needs to be asynchronous because it is sent over the Internet and delivery times cannot be precisely measured. Normally the data won't sit on an event hub longer than 24 hours.
+The data is sent to an Event Hubs instance so that it can wait until MedTech service is ready to receive it. The data transfer needs to be asynchronous because it's sent over the Internet and delivery times can't be precisely measured. Normally the data won't sit on an event hub longer than 24 hours.
For more information about Event Hubs, see [Event Hubs](../../event-hubs/event-hubs-about.md).
MedTech processes the data in five steps:
- Transform - Persist
-If the processing was successful and you did not get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource.
+If the processing was successful and you didn't get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource.
-For more details on the data flow through MedTech, see [MedTech service data flow](iot-data-flow.md).
+For more information on the data flow through MedTech, see [MedTech service data flow](iot-data-flow.md).
## Step 6: Verify the processed data
-You can verify that the data was processed correctly by checking to see if there is now a new Observation resource in the FHIR service. If the data isn't mapped or if the mapping isn't authored properly, the data will be skipped. If there are any problems, check the [device mapping](how-to-use-device-mappings.md) or the [FHIR destination mapping](how-to-use-fhir-mappings.md).
+You can verify that the data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the data isn't mapped or if the mapping isn't authored properly, the data will be skipped. If there are any problems, check the [device mapping](how-to-use-device-mappings.md) or the [FHIR destination mapping](how-to-use-fhir-mappings.md).
### Metrics
-You can verify that the data is correctly persisted into the FHIR service by using the [MedTech service metrics](how-to-display-metrics.md) in the Azure portal.
+You can verify that the data is correctly persisted into the FHIR service by using the [MedTech service metrics](how-to-configure-metrics.md) in the Azure portal.
## Next steps

This article only described the basic steps needed to get started using MedTech service. For information about deploying MedTech service in the workspace, see
->[!div class="nextstepaction"]
->[Deploy the MedTech service in the Azure portal](deploy-iot-connector-in-azure.md)
+> [!div class="nextstepaction"]
+> [Deploy the MedTech service in the Azure portal](deploy-iot-connector-in-azure.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
+
+ Title: Configure the MedTech service metrics - Azure Health Data Services
+description: This article explains how to display MedTech service metrics.
+ Last updated : 10/12/2022
+# How to configure the MedTech service metrics
+
+In this article, you'll learn how to configure the [MedTech service](iot-connector-overview.md) metrics in the Azure portal. You'll also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
+
+The MedTech service metrics can be used to help determine the health and performance of your MedTech service and can be useful with troubleshooting and seeing patterns and/or trends with your MedTech service.
+
+## Metric types for the MedTech service
+
+This table shows the available MedTech service metrics and the information that the metrics are capturing and displaying within the Azure portal:
+
+|Metric category|Metric name|Metric description|
+|--|--|--|
+|Availability|IotConnector Health Status|The overall health of the MedTech service.|
+|Errors|Total Error Count|The total number of errors.|
+|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](iot-data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](iot-data-flow.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](iot-data-flow.md#persist) by the MedTech service.|
+|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](iot-data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](iot-data-flow.md#transform) of the MedTech service.|
+|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
+|Traffic|Number of Normalized Messages|The number of normalized messages.|
+
+## Configure the MedTech service metrics
+
+1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
+
+ :::image type="content" source="media\iot-metrics-display\workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service within the workspace." lightbox="media\iot-metrics-display\workspace-displayed-with-connectors-button.png":::
+
+2. Select the MedTech service that you would like to display metrics for. For this example, we'll select a MedTech service named **mt-azuredocsdemo**. You'll be selecting a MedTech service within your own Azure Health Data Services workspace.
+
+ :::image type="content" source="media\iot-metrics-display\select-medtech-service.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\iot-metrics-display\select-medtech-service.png":::
+
+3. Select **Metrics** within the MedTech service page.
+
+ :::image type="content" source="media\iot-metrics-display\select-metrics-under-monitoring.png" alt-text="Screenshot of select the Metrics option within your MedTech service." lightbox="media\iot-metrics-display\select-metrics-under-monitoring.png":::
+
+4. The MedTech service metrics page will open allowing you to use the drop-down menus to view and select the metrics that are available for the MedTech service.
+
+ :::image type="content" source="media\iot-metrics-display\select-metrics-to-display.png" alt-text="Screenshot the MedTech service metrics page with drop-down menus." lightbox="media\iot-metrics-display\select-metrics-to-display.png":::
+
+5. Select the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
+
+ * **Scope** = Your MedTech service name (**Default**)
+ * **Metric Namespace** = Standard metrics (**Default**)
+ * **Metric** = The MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
+ * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
+
+6. You can now see your MedTech service metrics for **Number of Incoming Messages** displayed on the MedTech service metrics page.
+
+ :::image type="content" source="media\iot-metrics-display\select-metrics-being-displayed.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics-display\select-metrics-being-displayed.png":::
+
+7. You can add more metrics for your MedTech service by selecting **Add metric**.
+
+ :::image type="content" source="media\iot-metrics-display\select-add-metric.png" alt-text="Screenshot of select Add metric to add more MedTech service metrics." lightbox="media\iot-metrics-display\select-add-metric.png":::
+
+8. Then select the metrics that you would like to add to your MedTech service.
+
+ :::image type="content" source="media\iot-metrics-display\select-more-metrics.png" alt-text="Screenshot of select more metrics to add to your MedTech service." lightbox="media\iot-metrics-display\select-more-metrics.png":::
+
+ > [!IMPORTANT]
+ > If you leave the MedTech service metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure portal dashboard as a tile.
+ >
+ > To learn how to create an Azure portal dashboard and pin tiles, see [How to create an Azure portal dashboard and pin tiles](how-to-configure-metrics.md#how-to-create-an-azure-portal-dashboard-and-pin-tiles).
+
+ > [!TIP]
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started).
+
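+In addition to the portal, the same metrics can be pulled programmatically. Here's a minimal sketch using the `azure-monitor-query` Python package; the resource ID is a placeholder, and the metric name `NumberOfIncomingMessages` is an assumption inferred from the display names above, so check your resource's supported metrics list before relying on it.
+
+```python
+from datetime import timedelta
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import MetricsQueryClient, MetricAggregationType
+
+# Placeholder resource ID of the MedTech service (an iotconnectors child resource).
+resource_id = (
+    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
+    "Microsoft.HealthcareApis/workspaces/<workspace>/iotconnectors/<medtech-service>"
+)
+
+client = MetricsQueryClient(DefaultAzureCredential())
+
+# Query the last hour of the incoming-message count; the metric name is assumed.
+response = client.query_resource(
+    resource_id,
+    metric_names=["NumberOfIncomingMessages"],
+    timespan=timedelta(hours=1),
+    aggregations=[MetricAggregationType.COUNT],
+)
+
+for metric in response.metrics:
+    for series in metric.timeseries:
+        for point in series.data:
+            print(point.timestamp, point.count)
+```
+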
+## How to create an Azure portal dashboard and pin tiles
+
+To learn how to create an Azure portal dashboard and pin tiles, see [Create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards).
+
+## Next steps
+
+To learn how to enable the MedTech service diagnostic settings to export logs and metrics to another location (for example: an Azure storage account) for audit, backup, or troubleshooting, see
+
+> [!div class="nextstepaction"]
+> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
+
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
- Title: Display the MedTech service metrics - Azure Health Data Services
-description: This article explains how to display MedTech service metrics.
- Previously updated : 08/09/2022
-# How to display and configure the MedTech service metrics
-
-In this article, you'll learn how to display and configure the [MedTech service](iot-connector-overview.md) metrics in the Azure portal. You'll also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
-
-The MedTech service metrics can be used to help determine the health and performance of your MedTech service and can be useful with troubleshooting and seeing patterns and/or trends with your MedTech service.
-
-## Metric types for the MedTech service
-
-This table shows the available MedTech service metrics and the information that the metrics are capturing and displaying within the Azure portal:
-
-Metric category|Metric name|Metric description|
-|--|--|--|
-|Availability|IotConnector Health Status|The overall health of the MedTech service.|
-|Errors|Total Error Count|The total number of errors.|
-|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](iot-data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
-|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](iot-data-flow.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](iot-data-flow.md#persist) by the MedTech service.|
-|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](iot-data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
-|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](iot-data-flow.md#transform) of the MedTech service.|
-|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
-|Traffic|Number of Normalized Messages|The number of normalized messages.|
-
-## Display and configure the MedTech service metrics
-
-1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
-
- :::image type="content" source="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service within the workspace." lightbox="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png":::
-
-2. Select the MedTech service that you would like to display metrics for. For this example, we'll select a MedTech service named **mt-azuredocsdemo**. You'll select your own MedTech service.
-
- :::image type="content" source="media\iot-metrics-display\iot-connector-select.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\iot-metrics-display\iot-connector-select.png":::
-
-3. Select **Metrics** within the MedTech service page.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-metrics.png" alt-text="Screenshot of select the Metrics option within your MedTech service." lightbox="media\iot-metrics-display\iot-select-metrics.png":::
-
-4. The MedTech service metrics page will open allowing you to use the drop-down menus to view and select the metrics that are available for the MedTech service.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-opening-page.png" alt-text="Screenshot the MedTech service metrics page with drop-down menus." lightbox="media\iot-metrics-display\iot-metrics-opening-page.png":::
-
-5. Select the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
-
- * **Scope** = Your MedTech service name (**Default**)
- * **Metric Namespace** = Standard metrics (**Default**)
- * **Metric** = The MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
- * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
-
-6. You can now see your MedTech service metrics for **Number of Incoming Messages** displayed on the MedTech service metrics page.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-select-options.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics-display\iot-metrics-select-options.png":::
-
-7. You can add more metrics by selecting **Add metric**.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-add-metric.png" alt-text="Screenshot of select Add metric to add more MedTech service metrics." lightbox="media\iot-metrics-display\iot-select-add-metric.png":::
-
-8. Then select the metrics that you would like to add to your MedTech service.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-select-more-metrics.png" alt-text="Screenshot of select more metrics to add to your MedTech service." lightbox="media\iot-metrics-display\iot-metrics-select-more-metrics.png":::
-
- > [!TIP]
- >
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
-
- > [!IMPORTANT]
- >
- > If you leave the MedTech service metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure dashboard as a tile.
-
-## How to pin the MedTech service metrics tile to an Azure portal dashboard
-
-1. To pin the MedTech service metrics tile to an Azure portal dashboard, select the **Pin to dashboard** option.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard option." lightbox="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png":::
-
-2. Select the dashboard you would like to display your MedTech service metrics to by using the drop-down menu. For this example, we'll use a private dashboard named **Azuredocsdemo_Dashboard**. Select **Pin** to add your MedTech service metrics tile to the dashboard.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin options to complete the dashboard pinning process." lightbox="media\iot-metrics-display\iot-select-pin-to-dashboard.png":::
-
-3. You'll receive a confirmation that your MedTech service metrics tile was successfully added to your selected Azure portal dashboard.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png" alt-text="Screenshot of metrics tile successfully pinned to dashboard." lightbox="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png":::
-
-4. Once you've received a successful confirmation, select the **Dashboard** option.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard option." lightbox="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png":::
-
-5. Use the drop-down menu to select the dashboard that you pinned your MedTech service metrics tile. For this example, the dashboard is named **Azuredocsdemo_Dashboard**.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-pin.png" alt-text="Screenshot of selecting dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-select-dashboard-with-metrics-pin.png":::
-
-6. The dashboard will display the MedTech service metrics tile that you created in the previous steps.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-display-dashboard-with-metrics-pin.png" alt-text="Screenshot of dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-metrics-display-dashboard-with-metrics-pin.png":::
-
-## Next steps
-
-To learn how to configure the diagnostic settings and export the MedTech service metrics to another location (for example: an Azure storage account), see
-
-> [!div class="nextstepaction"]
-> [How to configure diagnostic settings for exporting the MedTech service metrics](iot-metrics-diagnostics-export.md)
-
-To learn about the MedTech service frequently asked questions (FAQs), see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](iot-connector-faqs.md)
-
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
+
+ Title: How to enable the MedTech service diagnostic settings - Azure Health Data Services
+description: This article explains how to configure the MedTech service diagnostic settings.
+ Last updated : 10/13/2022
+# How to enable diagnostic settings for the MedTech service
+
+In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs to different destinations (for example: to [Azure storage](/azure/storage/) or an [Azure event hub](/azure/event-hubs/)) for audit, analysis, or backup.
+
+## Create a diagnostic setting for the MedTech service
+1. To enable a diagnostic setting for your MedTech service, select **MedTech service** in your workspace under **Services**.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-medtech-service-in-workspace.png" alt-text="Screenshot of select the MedTech service within workspace." lightbox="media/iot-diagnostic-settings/select-medtech-service-in-workspace.png":::
+
+2. Select the MedTech service that you want to enable a diagnostic setting for. For this example, we'll be using a MedTech service named **mt-azuredocsdemo**. You'll be selecting a MedTech service within your own Azure Health Data Services workspace.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-medtech-service.png" alt-text="Screenshot of select the MedTech service for exporting metrics." lightbox="media/iot-diagnostic-settings/select-medtech-service.png":::
+
+3. Select the **Diagnostic settings** option under **Monitoring**.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-diagnostic-settings.png" alt-text="Screenshot of select the Diagnostic settings." lightbox="media/iot-diagnostic-settings/select-diagnostic-settings.png":::
+
+4. Select the **+ Add diagnostic setting** option.
+
+ :::image type="content" source="media/iot-diagnostic-settings/add-diagnostic-settings.png" alt-text="Screenshot of select the + Add diagnostic setting." lightbox="media/iot-diagnostic-settings/add-diagnostic-settings.png":::
+
+5. The **+ Add diagnostic setting** page will open, requiring configuration inputs from you.
+
+ 1. Enter a display name in the **Diagnostic setting name** box. For this example, we'll name it **MedTech_service_All_Logs_and_Metrics**. You'll enter a display name of your own choosing.
+
+ 2. Under **Logs**, select the **AllLogs** option.
+
+ 3. Under **Metrics**, select the **AllMetrics** option.
+
+ > [!Note]
+ > To view a complete list of MedTech service metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors).
+
+ 4. Under **Destination details**, select the destination you want to use for your exported MedTech service metrics. In this example, we've selected an Azure storage account named **azuredocsdemostorage**. You'll select a destination of your own choosing.
+
+ > [!Important]
+ > Each **Destination details** selection requires that certain resources (for example, an existing Azure storage account) be created and available before the selection can be successfully configured. Choose each selection to see which resources are required.
+
+ Metrics can be exported to the following destinations:
+
+ |Destination|Description|
+ |--|--|
+ |Log Analytics workspace|Metrics are converted to log form. Sending the metrics to the Azure Monitor Logs store (which is searchable via Log Analytics) enables you to integrate them into queries, alerts, and visualizations with existing log data.|
+ |Azure storage account|Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive, and logs can be kept there indefinitely.|
+ |Azure event hub|Sending logs and metrics to an event hub allows you to stream data to external systems such as third-party Security Information and Event Managements (SIEMs) and other Log Analytics solutions.|
+ |Azure Monitor partner integrations|Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners.|
+
+ 5. Select the **Save** option to save your diagnostic setting selections.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-all-logs-and-metrics.png" alt-text="Screenshot of diagnostic setting and required fields." lightbox="media/iot-diagnostic-settings/select-all-logs-and-metrics.png":::
+
+6. Once you've selected the **Save** option, the page will display a message that the diagnostic setting for your MedTech service has been saved successfully.
+
+ :::image type="content" source="media/iot-diagnostic-settings/diagnostic-settings-successfully-saved.png" alt-text="Screenshot of a successful diagnostic setting save." lightbox="media/iot-diagnostic-settings/diagnostic-settings-successfully-saved.png":::
+
+ > [!Note]
+ > It might take up to 15 minutes for the first MedTech service metrics to display in the destination of your choice.
+
+7. To view your saved diagnostic setting, select **Diagnostic settings**.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-diagnostic-settings-banner.png" alt-text="Screenshot of Diagnostic settings option to view the saved diagnostic setting." lightbox="media/iot-diagnostic-settings/select-diagnostic-settings-banner.png":::
+
+8. The **Diagnostic settings** page will open, displaying your newly created diagnostic setting for your MedTech service. You'll have the ability to:
+
+ 1. **Edit setting**: Edit or delete your saved MedTech service diagnostic setting.
+ 2. **+ Add diagnostic setting**: Create more diagnostic settings for your MedTech service (for example: you may also want to send your MedTech service metrics to another destination like a Log Analytics workspace).
+
+ :::image type="content" source="media/iot-diagnostic-settings/view-and-edit-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings options." lightbox="media/iot-diagnostic-settings/view-and-edit-diagnostic-settings.png":::
+
+ > [!TIP]
+ > For more information about how to work with diagnostic settings, see [Diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
+ >
+ > For more information about how to work with diagnostic logs, see the [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).
+
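+If you prefer automation over the portal steps above, a diagnostic setting can also be created with the `azure-mgmt-monitor` Python package. This is a sketch under assumptions: the resource and storage account IDs are placeholders, and the `allLogs` category group and `AllMetrics` category are taken from the portal options described above, so verify them against your resource's available categories.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.monitor import MonitorManagementClient
+
+subscription_id = "<sub-id>"  # placeholder
+medtech_resource_id = (
+    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
+    "Microsoft.HealthcareApis/workspaces/<workspace>/iotconnectors/<medtech-service>"
+)
+storage_account_id = (
+    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
+    "Microsoft.Storage/storageAccounts/<storage-account>"
+)
+
+client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)
+
+# Send all logs and all metrics to the storage account, mirroring the portal steps.
+# The "allLogs" category group and "AllMetrics" category names are assumptions.
+client.diagnostic_settings.create_or_update(
+    medtech_resource_id,
+    "MedTech_service_All_Logs_and_Metrics",
+    {
+        "storage_account_id": storage_account_id,
+        "logs": [{"category_group": "allLogs", "enabled": True}],
+        "metrics": [{"category": "AllMetrics", "enabled": True}],
+    },
+)
+```
+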
+## Next steps
+
+To view the frequently asked questions (FAQs) about the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [MedTech service FAQs](iot-connector-faqs.md)
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
+
+ Title: How to use the MedTech service monitoring tab - Azure Health Data Services
+description: This article explains how to use the MedTech service monitoring tab.
+ Last updated : 10/10/2022
+# How to use the MedTech service monitoring tab
+
+In this article, you'll learn how to use the [MedTech service](iot-connector-overview.md) monitoring tab in the Azure portal. The monitoring tab provides access to crucial MedTech service metrics. These metrics can be used in assessing the health and performance of your MedTech service and can be useful with troubleshooting and seeing patterns and/or trends with your MedTech service.
+
+## Use the MedTech service monitoring tab
+
+1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
+
+ :::image type="content" source="media\iot-monitoring-tab\workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service within the workspace." lightbox="media\iot-monitoring-tab\workspace-displayed-with-connectors-button.png":::
+
+2. Select the MedTech service that you would like to display metrics for. For this example, we'll select a MedTech service named **mt-azuredocsdemo**. You'll be selecting a MedTech service within your own Azure Health Data Services workspace.
+
+ :::image type="content" source="media\iot-monitoring-tab\select-medtech-service.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\iot-monitoring-tab\select-medtech-service.png":::
+
+3. Select the **Monitoring** tab within the MedTech service page.
+
+ :::image type="content" source="media\iot-monitoring-tab\select-monitoring-tab.png" alt-text="Screenshot of select the Metrics option within your MedTech service." lightbox="media\iot-monitoring-tab\select-monitoring-tab.png":::
+
+4. The MedTech service monitoring tab will open, displaying a subset of the supported MedTech service metrics. By default, the **Show data for last** option is set to 1 hour. To adjust the time duration, select the **Show data for last** option, select the time period you would like to view, and select **Apply**. Select the down arrow in the **Traffic** MedTech service metrics tile to display the next set of MedTech service traffic metrics.
+
+ :::image type="content" source="media\iot-monitoring-tab\display-metrics-tile.png" alt-text="Screenshot the MedTech service monitoring tab with drop-down menus." lightbox="media\iot-monitoring-tab\display-metrics-tile.png":::
+
+5. Select the pin icon to pin the tile to an Azure portal dashboard of your choosing.
+
+ :::image type="content" source="media\iot-monitoring-tab\pin-metrics-to-dashboard.png" alt-text="Screenshot the MedTech service monitoring tile with red box around the pin icon." lightbox="media\iot-monitoring-tab\pin-metrics-to-dashboard.png":::
+
+ > [!IMPORTANT]
+ > If you leave the MedTech service monitoring tab, any customizations you've made to the monitoring settings are lost and will have to be recreated. If you would like to save your customizations for future viewing, you can pin them to an Azure portal dashboard as a tile.
+ >
+ > To learn how to customize and save metrics settings to an Azure portal dashboard and tile, see [How to configure the MedTech service metrics](how-to-configure-metrics.md).
+
+ > [!TIP]
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started).
+
+## Available metrics for the MedTech service
+
+This table shows the available MedTech service metrics and the information that the metrics are capturing. The metrics in **bold** are the metrics displayed within the **Monitoring** tab:
+
+|Metric category|Metric name|Metric description|
+|--|--|--|
+|Availability|IotConnector Health Status|The overall health of the MedTech service.|
+|Errors|**Total Error Count**|The total number of errors.|
+|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](iot-data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](iot-data-flow.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](iot-data-flow.md#persist) by the MedTech service.|
+|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](iot-data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|**Number of Measurements**|The number of normalized value readings received by the FHIR [transformation stage](iot-data-flow.md#transform) of the MedTech service.|
+|Traffic|**Number of Message Groups**|The number of groups that have messages aggregated in the designated time window.|
+|Traffic|**Number of Normalized Messages**|The number of normalized messages.|
+
+## Next steps
+
+To learn how to configure the MedTech service metrics, see
+
+> [!div class="nextstepaction"]
+> [How to configure the MedTech service metrics](how-to-configure-metrics.md)
+
+To learn how to configure the MedTech service diagnostic settings to export logs to another location (for example: an Azure storage account) for audit, backup, or troubleshooting, see
+
+> [!div class="nextstepaction"]
+> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
+
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-metrics-diagnostics-export.md
- Title: How to configure the MedTech service diagnostic settings for metrics export - Azure Health Data Services
-description: This article explains how to configure the MedTech service diagnostic settings for metrics exporting.
- Previously updated : 08/17/2022
-# How to configure diagnostic settings for exporting the MedTech service metrics
-
-In this article, you'll learn how to configure diagnostic settings for the MedTech service to export metrics to different destinations (for example: to [Azure storage](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, or backup.
-
-## Create a diagnostic setting for the MedTech service
-1. To enable metrics export for your MedTech service, select **MedTech service** in your workspace under **Services**.
-
- :::image type="content" source="media/iot-metrics-export/iot-select-medtech-service-in-workspace.png" alt-text="Screenshot of select the MedTech service within workspace." lightbox="media/iot-metrics-export/iot-select-medtech-service-in-workspace.png":::
-
-2. Select the MedTech service that you want to configure for metrics exporting. For this example, we'll be using a MedTech service named **mt-azuredocsdemo**. You'll be selecting a MedTech service named by you within your Azure Health Data Services workspace.
-
- :::image type="content" source="media/iot-metrics-export/iot-select-medtech-service.png" alt-text="Screenshot of select the MedTech service for exporting metrics." lightbox="media/iot-metrics-export/iot-select-medtech-service.png":::
-
-3. Select the **Diagnostic settings** option under **Monitoring**.
-
- :::image type="content" source="media/iot-metrics-export/iot-select-diagnostic-settings.png" alt-text="Screenshot of select the Diagnostic settings." lightbox="media/iot-metrics-export/iot-select-diagnostic-settings.png":::
-
-4. Select the **+ Add diagnostic setting** option.
-
- :::image type="content" source="media/iot-metrics-export/iot-add-diagnostic-setting.png" alt-text="Screenshot of select the + Add diagnostic setting." lightbox="media/iot-metrics-export/iot-add-diagnostic-setting.png":::
-
-5. The **+ Add diagnostic setting** page will open, requiring configuration inputs from you.
-
- :::image type="content" source="media/iot-metrics-export/iot-select-diagnostic-setting-options.png" alt-text="Screenshot of diagnostic setting and required fields." lightbox="media/iot-metrics-export/iot-select-diagnostic-setting-options.png":::
-
- 1. Enter a display name in the **Diagnostic setting name** box. For this example, we'll name it **MedTech_service_All_Metrics**. You'll enter a display name of your own choosing.
-
- 2. Under **Metrics**, select the **AllMetrics** option.
-
- > [!Note]
- >
- > The **AllMetrics** option is the only option available and will export all currently supported MedTech service metrics.
- >
- > To view a complete list of MedTech service metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors).
-
- 3. Under **Destination details**, select the destination you want to use for your exported MedTech service metrics. In this example, we've selected an Azure storage account. You'll select a destination of your own choosing.
-
- > [!Important]
- >
- > Each **Destination details** selection requires that certain resources (for example, an existing Azure storage account) be created and available before the selection can be successfully configured. Choose each selection to see which resources are required.
-
- Metrics can be exported to the following destinations:
-
- |Destination|Description|
- |--|--|
- |Log Analytics workspace|Metrics are converted to log form. Sending the metrics to the Azure Monitor Logs store (which is searchable via Log Analytics) enables you to integrate them into queries, alerts, and visualizations with existing log data.|
- |Azure storage account|Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive, and logs can be kept there indefinitely.|
- |Azure event hub|Sending logs and metrics to an event hub allows you to stream data to external systems such as third-party Security Information and Event Managements (SIEMs) and other Log Analytics solutions.|
- |Azure Monitor partner integrations|Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners.|
-
- 4. Select the **Save** option to save your diagnostic setting selections.
-
-6. Once you've selected the **Save** option, the page will display a message that the diagnostic setting for your MedTech service has been saved successfully.
-
- :::image type="content" source="media/iot-metrics-export/iot-successful-save-diagnostic-setting.png" alt-text="Screenshot of a successful diagnostic setting save." lightbox="media/iot-metrics-export/iot-successful-save-diagnostic-setting.png":::
-
- > [!Note]
- >
- > It might take up to 15 minutes for the first MedTech service metrics to display in the destination of your choice.
-
-7. To view your saved diagnostic setting, select **Diagnostic settings**.
-
- :::image type="content" source="media/iot-metrics-export/iot-navigate-to-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings option to view the saved diagnostic setting." lightbox="media/iot-metrics-export/iot-navigate-to-diagnostic-settings.png":::
-
-8. The **Diagnostic settings** page will open, displaying your newly created diagnostic setting for your MedTech service. You'll have the ability to:
-
- 1. **Edit setting**: Edit or delete your saved MedTech service diagnostic setting.
- 2. **+ Add diagnostic setting**: Create more diagnostic settings for your MedTech service (for example: you may also want to send your MedTech service metrics to another destination like a Logs Analytics workspace).
-
- :::image type="content" source="media/iot-metrics-export/iot-view-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings options." lightbox="media/iot-metrics-export/iot-view-diagnostic-settings.png":::
-
- > [!TIP]
- >
- > For more information about how to work with diagnostic logs, see the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md).
-
-## Next steps
-
-To view the frequently asked questions (FAQs) about the MedTech service, see
-
->[!div class="nextstepaction"]
->[MedTech service FAQs](iot-connector-faqs.md)
healthcare-apis Iot Troubleshoot Error Messages And Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-error-messages-and-conditions.md
This article provides steps for troubleshooting and fixing MedTech service error messages and conditions.

> [!IMPORTANT]
-> Having access to MedTech service metrics is essential for monitoring and troubleshooting. MedTech service assists you to do these actions through [Metrics](./how-to-display-metrics.md).
+> Having access to MedTech service metrics is essential for monitoring and troubleshooting. MedTech service assists you to do these actions through [Metrics](how-to-configure-metrics.md).
> [!TIP]
> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
healthcare-apis Iot Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-guide.md
- Title: MedTech service troubleshooting guides - Azure Health Data Services
-description: This article helps users troubleshoot MedTech service error messages and conditions and provides fixes.
- Previously updated : 02/16/2022
-# Troubleshoot MedTech service
-
-This article provides guides and resources to troubleshoot the MedTech service.
-
-> [!IMPORTANT]
-> Having access to the MedTech service Metrics is essential for monitoring and troubleshooting. The MedTech service assists you to do these actions through [Metrics](./how-to-display-metrics.md).
-
-> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-
-> [!NOTE]
-> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
-
-## MedTech service troubleshooting guides
-
-### Device and FHIR destination mappings
-
-* [Troubleshoot MedTech service Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings](./iot-troubleshoot-mappings.md)
-
-### Error messages and conditions
-
-* [Troubleshoot MedTech service error messages and conditions](./iot-troubleshoot-error-messages-and-conditions.md)
-
-### How-To
-* [How to display Metrics](./how-to-display-metrics.md)
-* [How to use Device mappings](./how-to-use-device-mappings.md)
-* [How to use FHIR destination mappings](./how-to-use-fhir-mappings.md)
-* [How to create file copies of mappings](./how-to-create-mappings-copies.md)
-
-## Next steps
-To learn about frequently asked questions (FAQs) about the MedTech service, see
-
->[!div class="nextstepaction"]
->[Frequently asked questions about the MedTech service](iot-connector-faqs.md)
-
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Troubleshoot Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-mappings.md
Previously updated : 02/16/2022 Last updated : 10/10/2022
This article provides the validation steps MedTech service performs on the Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings and can be used for troubleshooting mappings error messages and conditions.

> [!IMPORTANT]
-> Having access to MedTech service Metrics is essential for monitoring and troubleshooting. MedTech service assists you to do these actions through [Metrics](./how-to-display-metrics.md).
+> Having access to MedTech service Metrics is essential for monitoring and troubleshooting. MedTech service assists you to do these actions through [Metrics](how-to-configure-metrics.md).
> [!TIP]
> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
healthcare-apis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/logging.md
Previously updated : 06/06/2022 Last updated : 10/10/2022
For more information about service logs and metrics for the DICOM service and MedTech service, see
>[!div class="nextstepaction"]
>[Enable diagnostic logging in the DICOM service](./dicom/enable-diagnostic-logging.md)

>[!div class="nextstepaction"]
->[How to display MedTech service metrics](./../healthcare-apis/iot/how-to-display-metrics.md)
+>[How to enable diagnostic settings for the MedTech service](./../healthcare-apis/iot/how-to-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
import-export Storage Import Export Data To Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-files.md
Before you create an import job to transfer data into Azure Files, carefully review the following prerequisites:
- Have an active Azure subscription to use with Import/Export service.
- Have at least one Azure Storage account. See the list of [Supported storage accounts and storage types for Import/Export service](storage-import-export-requirements.md).
- - Consider configuring large file shares on the storage account. During imports to Azure Files, if a file share doesn't have enough free space, auto splitting the data to multiple Azure file shares is no longer supported, and the copy will fail. For instructions, see [Configure large file shares on a storage account](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#enable-large-files-shares-on-an-existing-account).
+ - Consider configuring large file shares on the storage account. During imports to Azure Files, if a file share doesn't have enough free space, auto splitting the data to multiple Azure file shares is no longer supported, and the copy will fail. For instructions, see [Configure large file shares on a storage account](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#enable-large-file-shares-on-an-existing-account).
  - For information on creating a new storage account, see [How to create a storage account](../storage/common/storage-account-create.md).
- Have an adequate number of disks of [supported types](storage-import-export-requirements.md#supported-disks).
- Have a Windows system running a [supported OS version](storage-import-export-requirements.md#supported-operating-systems).
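As a sketch of the large-file-shares prerequisite above, the setting can also be flipped on an existing account with the `azure-mgmt-storage` Python package; the subscription, resource group, and account names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Enable large file shares on an existing account. Note that this setting
# can't be turned back off once it's enabled.
client.storage_accounts.update(
    "<resource-group>",
    "<storage-account>",
    {"large_file_shares_state": "Enabled"},
)
```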
Install-Module -Name Az.ImportExport
[!INCLUDE [storage-import-export-verify-data-copy](../../includes/storage-import-export-verify-data-copy.md)]

> [!NOTE]
-> In the latest version of the Azure Import/Export tool for files (2.2.0.300), if a file share doesn't have enough free space, the data is no longer auto split to multiple Azure file shares. Instead, the copy fails, and you'll be contacted by Support. You'll need to either configure large file shares on the storage account or move around some data to make space in the share. For more information, see [Configure large file shares on a storage account](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#enable-large-files-shares-on-an-existing-account).
+> In the latest version of the Azure Import/Export tool for files (2.2.0.300), if a file share doesn't have enough free space, the data is no longer auto split to multiple Azure file shares. Instead, the copy fails, and you'll be contacted by Support. You'll need to either configure large file shares on the storage account or move around some data to make space in the share. For more information, see [Configure large file shares on a storage account](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#enable-large-file-shares-on-an-existing-account).
## Samples for journal files
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md
Common scenarios for sending optional payloads are:
* [Custom allocation policies](concepts-custom-allocation.md) can use the device payload to help select an IoT hub for a device or set its initial twin. For example, you may want to allocate your devices based on the device model. In this case, you can configure the device to report its model information when it registers. DPS will pass the deviceΓÇÖs payload to the custom allocation webhook. Then your webhook can decide which IoT hub the device will be provisioned to based on the device model information. If needed, the webhook can also return data back to the device as a JSON object in the webhook response. To learn more, see [Use device payloads in custom allocation](concepts-custom-allocation.md#use-device-payloads-in-custom-allocation).
-* [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices *may* use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
+* [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices *may* use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
* [IoT Central](../iot-central/core/overview-iot-central.md) devices that connect through DPS *should* follow [IoT Plug and Play conventions](..//iot-develop/concepts-convention.md) and send their model ID when they register. IoT Central uses the model ID to assign the device to the correct device template. To learn more, see [Device implementation and best practices for IoT Central](../iot-central/core/concepts-device-implementation.md).
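To make the optional-payload flow in the scenarios above concrete, here's a minimal sketch of a device sending a model ID during registration with the `azure-iot-device` Python SDK; the registration ID, ID scope, symmetric key, and model ID are placeholders.

```python
from azure.iot.device import ProvisioningDeviceClient

client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="<registration-id>",
    id_scope="<id-scope>",
    symmetric_key="<device-symmetric-key>",
)

# Optional payload sent with the registration request; DPS passes it to custom
# allocation webhooks, and PnP devices use it to report their model ID.
client.provisioning_payload = {"modelId": "dtmi:com:example:Thermostat;1"}

result = client.register()
print(result.status, result.registration_state.assigned_hub)
```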
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
This is the baseline approach for any Windows VM that hosts Azure IoT Edge for L
If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server). ## Deployment on Windows VM on VMware ESXi
-Both Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions can host Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
-
->[!NOTE]
-> Per [VMware KB2009916](https://kb.vmware.com/s/article/2009916), currently nested virtualization is limited to Microsoft Hyper-V, strictly for VBS only and not for virtualizing multiple VMs. We are working to extend this support to EFLOW.
+Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions can host Azure IoT Edge for Linux on Windows on top of a Windows virtual machine. Read [VMware KB2009916](https://kb.vmware.com/s/article/2009916) for more information on VMware ESXi nested virtualization support.
To set up Azure IoT Edge for Linux on Windows on a VMware ESXi Windows virtual machine, use the following steps:

1. Create a Windows virtual machine on the VMware ESXi host. For more information about VMware VM deployment, see [VMware - Deploying Virtual Machines](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-39D19B2B-A11C-42AE-AC80-DDA8682AB42C.html).
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
You can use the Event Hubs SDKs to read from the built-in endpoint in environmen
| Language | Sample |
| -- | -- |
-| .NET | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/Getting%20Started/ReadD2cMessages) |
+| .NET | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/getting%20started/ReadD2cMessages) |
| Java | [read-d2c-messages Java](https://github.com/Azure-Samples/azure-iot-samples-java/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
| Node.js | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
| Python | [read-d2c-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) |
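For reference, a minimal Python sketch of reading from the built-in endpoint with the `azure-eventhub` package follows; the Event Hubs-compatible connection string comes from your IoT hub's **Built-in endpoints** page and is a placeholder here.

```python
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Print each device-to-cloud message as it arrives.
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hub-compatible-connection-string>",  # placeholder
    consumer_group="$Default",
)

with client:
    # starting_position="-1" reads from the beginning of each partition.
    client.receive(on_event=on_event, starting_position="-1")
```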
You can use the Event Hubs SDKs to read from the built-in endpoint in environmen
For more detail, see the [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial.
-* If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md).
+* If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md).
iot-hub Iot Hub How To Order Connection State Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-order-connection-state-events.md
From the moment your device runs, an order of operations activates:
1. The Logic App processes the HTTP request based on a condition you set
1. The Logic App logs connection or disconnection events into a new document in Cosmos DB
+ :::image type="content" source="media/iot-hub-how-to-order-connection-state-events/event-grid-setup.png" alt-text="Screenshot of the setup we'll create for this article. This setup shows how all services and devices are connected." lightbox="media/iot-hub-how-to-order-connection-state-events/event-grid-setup.png":::
+ <!-- A sequence number is used in the *Device Connected* and *Device Disconnected* events to track and order events.
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
az keyvault set-policy --name "<your-unique-keyvault-name>" --object-id "<system
## Log in to the VM
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure/virtual-machines/linux-vm-connect) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
To log into a Linux VM, you can use the ssh command with the \<publicIpAddress\> given in the [Create a virtual machine](#create-a-virtual-machine) step:
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Set-AzKeyVaultAccessPolicy -ResourceGroupName <YourResourceGroupName> -VaultName
## Sign in to the virtual machine
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure Windows virtual machine](../../virtual-machines/windows/connect-logon.md) or [Connect and sign in to an Azure Linux virtual machine](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure Windows virtual machine](../../virtual-machines/windows/connect-logon.md) or [Connect and sign in to an Azure Linux virtual machine](/azure/virtual-machines/linux-vm-connect).
## Set up the console app
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
az keyvault set-policy --name "<your-unique-keyvault-name>" --object-id "<system
## Log in to the VM
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure/virtual-machines/linux-vm-connect) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
To log into a Linux VM, you can use the ssh command with the \<publicIpAddress\> given in the [Create a virtual machine](#create-a-virtual-machine) step:
lab-services Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-overview.md
[!INCLUDE [preview note](./includes/lab-services-new-update-note.md)]
-**Azure Lab Services** lets you create labs whose infrastructure is managed by Azure. The service itself handles all the infrastructure management, from spinning up VMs to handling errors and scaling the infrastructure. Azure Lab Services was designed with three major personas in mind: [administrators, educators, and students](classroom-labs-concepts.md#user-profiles). After an IT administrator creates a lab plan, an educator can quickly set up a lab for the class. Educators specify the number and type of VMs needed, configures the template VM, and adds users to the class. Once a user registers to the class, the user can access the VM to do exercises for the class.
+**Azure Lab Services** lets you create labs whose infrastructure is managed by Azure. The service itself handles all the infrastructure management, from spinning up virtual machines (VMs) to handling errors and scaling the infrastructure. Azure Lab Services was designed with three major personas in mind: [administrators, educators, and students](classroom-labs-concepts.md#user-profiles). After an IT administrator creates a lab plan, an educator can quickly set up a lab for the class. Educators specify the number and type of VMs needed, configure the template VM, and add users to the class. Once a user registers for the class, the user can access the VM to do exercises for the class.
To [create a lab](tutorial-setup-lab.md), you need to [create a lab plan](tutorial-setup-lab-plan.md) for your organization first. The lab plan serves as a collection of configurations and settings that apply to the labs created from it.
load-testing Tutorial Identify Performance Regression With Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-performance-regression-with-cicd.md
Before you configure the CI/CD pipeline to run a load test, you'll grant the CI/
# [Azure Pipelines](#tab/pipelines)
-To access your Azure Load Testing resource from the Azure Pipelines workflow, you first create a service connection in your Azure DevOps project. The service connection creates an Azure Active Directory [service principal](/active-directory/develop/app-objects-and-service-principals#service-principal-object). This service principal represents your Azure Pipelines workflow in Azure Active Directory.
+To access your Azure Load Testing resource from the Azure Pipelines workflow, you first create a service connection in your Azure DevOps project. The service connection creates an Azure Active Directory [service principal](/azure/active-directory/develop/app-objects-and-service-principals#service-principal-object). This service principal represents your Azure Pipelines workflow in Azure Active Directory.
Next, you grant permissions to this service principal to create and run a load test with your Azure Load Testing resource.
To grant access to your Azure Load Testing resource, assign the Load Test Contri
# [GitHub Actions](#tab/github)
-To access your Azure Load Testing resource from the GitHub Actions workflow, you first create an Azure Active Directory [service principal](/active-directory/develop/app-objects-and-service-principals#service-principal-object). This service principal represents your GitHub Actions workflow in Azure Active Directory.
+To access your Azure Load Testing resource from the GitHub Actions workflow, you first create an Azure Active Directory [service principal](/azure/active-directory/develop/app-objects-and-service-principals#service-principal-object). This service principal represents your GitHub Actions workflow in Azure Active Directory.
Next, you grant permissions to the service principal to create and run a load test with your Azure Load Testing resource.
To grant access to your Azure Load Testing resource, assign the Load Test Contri
You'll now create a CI/CD workflow to create and run a load test for the sample application. The sample application repository already contains a CI/CD workflow definition that first deploys the application to Azure, and then creates a load test based on JMeter test script (*SampleApp.jmx*). You'll update the sample workflow definition file to specify the Azure subscription and application details.
-On the first CI/CD workflow run, it creates a new Azure Load Testing resource in your Azure subscription by using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template. Learn more about ARM templates [here](/azure-resource-manager/templates/overview).
+On the first CI/CD workflow run, it creates a new Azure Load Testing resource in your Azure subscription by using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template. Learn more about [ARM templates](/azure/azure-resource-manager/templates/overview).
# [Azure Pipelines](#tab/pipelines)
logic-apps Logic Apps Create Logic Apps From Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-logic-apps-from-templates.md
Title: Create logic app workflows faster with prebuilt templates
-description: Quickly build logic app workflows with prebuilt templates in Azure Logic Apps, and find out about available templates.
+ Title: Create Consumption workflows faster with prebuilt templates
+description: To quickly build a Consumption logic app workflow, start with a prebuilt template in Azure Logic Apps.
ms.suite: integration Previously updated : 10/12/2022 Last updated : 10/19/2022 #Customer intent: As an Azure Logic Apps developer, I want to build a logic app workflow from a template so that I can reduce development time.
-# Create a logic app workflow from a prebuilt template
+# Create a Consumption logic app workflow from a prebuilt template
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-To get you started creating workflows quickly, Azure Logic Apps provides templates, which are prebuilt logic app workflows that follow commonly used patterns.
+To get you started creating workflows quickly, Azure Logic Apps provides prebuilt templates for logic app workflows that follow commonly used patterns.
+
+> [!NOTE]
+>
+> Workflow templates and the workflow template gallery are currently available only for Consumption logic app workflows.
This how-to guide shows how to use these templates as provided or edit them to fit your scenario.
-Here are some template categories:
+## Template categories
-| Template type | Description |
-| - | -- |
-| Enterprise cloud | For integrating Azure Blob Storage, Dynamics CRM, Salesforce, and Box. Also includes other connectors for your enterprise cloud needs. For example, you can use these templates to organize business leads or back up your corporate file data. |
-| Personal productivity | For improving personal productivity. You can use these templates to set daily reminders, turn important work items into to-do lists, and automate lengthy tasks down to a single user-approval step. |
-| Consumer cloud | For integrating social media services such as Twitter, Slack, and email. Useful for strengthening social media marketing initiatives. These templates also include tasks such as cloud copying, which increases productivity by saving time on traditionally repetitive tasks. |
-| Enterprise integration pack | For configuring validate, extract, transform, enrich, and route (VETER) pipelines. Also for receiving an X12 EDI document over AS2 and transforming it to XML, and for handling X12, EDIFACT, and AS2 messages. |
-| Protocol pattern | For implementing protocol patterns such as request-response over HTTP and integrations across FTP and SFTP. Use these templates as provided, or build on them for complex protocol patterns. |
-|||
+| Template type | Description |
+| - | -- |
+| Enterprise cloud | For integrating Azure Blob Storage, Dynamics CRM, Salesforce, and Box. Also includes other connectors for your enterprise cloud needs. For example, you can use these templates to organize business leads or back up your corporate file data. |
+| Personal productivity | For improving personal productivity. You can use these templates to set daily reminders, turn important work items into to-do lists, and automate lengthy tasks down to a single user-approval step. |
+| Consumer cloud | For integrating social media services such as Twitter, Slack, and email. Useful for strengthening social media marketing initiatives. These templates also include tasks such as cloud copying, which increases productivity by saving time on traditionally repetitive tasks. |
+| Enterprise integration pack | For configuring validate, extract, transform, enrich, and route (VETER) pipelines. Also for receiving an X12 EDI document over AS2 and transforming it to XML, and for handling X12, EDIFACT, and AS2 messages. |
+| Protocol pattern | For implementing protocol patterns such as request-response over HTTP and integrations across FTP and SFTP. Use these templates as provided, or build on them for complex protocol patterns. |
## Prerequisites

- An Azure account and subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A basic understanding of how to build a logic app workflow. For more information, see [Create a Consumption logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-## Create a logic app workflow from a template
+- Basic understanding about how to build a logic app workflow. For more information, see [Create your first Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md).
-1. Sign in to the [Azure portal](https://portal.azure.com).
+## Create a Consumption workflow from a template
-1. Select **Create a resource** > **Integration** > **Logic App**.
+1. In the [Azure portal](https://portal.azure.com), from the home page, select **Create a resource** > **Integration** > **Logic App**.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/azure-portal-create-logic-app.png" alt-text="Screenshot of the Azure portal. Under 'Popular Azure services,' 'Logic App' is highlighted. On the navigation menu, 'Integration' is highlighted.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/azure-portal-create-logic-app.png" alt-text="Screenshot showing the Azure portal. On the navigation menu, 'Integration' is selected. Under 'Popular Azure services', 'Logic App' is selected.":::
-1. In the **Create Logic App** page, enter the following values:
+1. On the **Create Logic App** page, enter the following values:
- | Setting | Value | Description |
- | - | -- | -- |
+ | Setting | Value | Description |
+ | - | -- | -- |
   | **Subscription** | <*your-Azure-subscription-name*> | Select the Azure subscription that you want to use. |
   | **Resource Group** | <*your-Azure-resource-group-name*> | Create or select an [Azure resource group](../azure-resource-manager/management/overview.md) for this logic app resource and its associated resources. |
   | **Logic App name** | <*your-logic-app-name*> | Provide a unique logic app resource name. |
   | **Region** | <*your-Azure-datacenter-region*> | Select the datacenter region for deploying your logic app, for example, **West US**. |
   | **Enable log analytics** | **No** (default) or **Yes** | To set up [diagnostic logging](../logic-apps/monitor-logic-apps-log-analytics.md) for your logic app resource by using [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), select **Yes**. This selection requires that you already have a Log Analytics workspace. |
- | **Plan type** | **Consumption** or **Standard** | Select **Consumption** to create a Consumption logic app workflow. |
+ | **Plan type** | **Consumption** or **Standard** | Select **Consumption** to create a Consumption logic app workflow from a template. |
| **Zone redundancy** | **Disabled** (default) or **Enabled** | If this option is available, select **Enabled** if you want to protect your logic app resource from a regional failure. But first [check that zone redundancy is available in your Azure region](/azure/logic-apps/set-up-zone-redundancy-availability-zones?tabs=consumption#considerations). |
- ||||
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-settings.png" alt-text="Screenshot of the 'Create Logic App' page. The 'Consumption' plan type is selected, and values are visible in other input fields.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-settings.png" alt-text="Screenshot showing the 'Create Logic App' page with example property values provided and the 'Consumption' plan type selected.":::
1. Select **Review + Create**.
Here are some template categories:
:::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/create-logic-app.png" alt-text="Screenshot of the 'Create Logic App' page. The name, subscription, and other values are visible, and the 'Create' button is highlighted.":::
-1. When deployment is complete, select **Go to resource**. The designer opens and shows a page with an introduction video. Under the video, you can find templates for common logic app workflow patterns.
+1. When deployment is complete, select **Go to resource**. The designer opens and shows a page with an introduction video. Under the video, you can find templates for common logic app workflow patterns.
1. Scroll past the introduction video and common triggers to **Templates**. Select a prebuilt template.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot of the designer. Under 'Templates,' three templates are visible. One called 'Delete old Azure blobs' is highlighted.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot showing the designer. Under 'Templates,' three templates are visible. The templated named 'Delete old Azure blobs' is selected.":::
When you select a prebuilt template, you can view more information about that template.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screenshot that shows information about the 'Delete old Azure blobs' template, including a description and a diagram that shows a recurring schedule.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screenshot showing information about the 'Delete old Azure blobs' template, including a description and a diagram that shows a recurring schedule.":::
-1. To continue with the selected template, select **Use this template**.
+1. To continue with the selected template, select **Use this template**.
1. Based on the connectors in the template, you're prompted to perform any of these steps:
- * Sign in with your credentials to systems or services that are referenced by the template.
+ - Sign in with your credentials to systems or services that are referenced by the template.
- * Create connections for any systems or services that are referenced by the template. To create a connection, provide a name for your connection, and if necessary, select the resource that you want to use.
+ - Create connections for any systems or services that are referenced by the template. To create a connection, provide a name for your connection, and if necessary, select the resource that you want to use.
- > [!NOTE]
- > Many templates include connectors that have required properties that are prepopulated. Other templates require that you provide values before you can properly deploy the logic app workflow. If you try to deploy without completing the missing property fields, you get an error message.
+ > [!NOTE]
+ >
+ > Many templates include connectors that have required properties that are prepopulated. Other templates
+ > require that you provide values before you can properly deploy the logic app resource. If you try to
+ > deploy without completing the missing property fields, you get an error message.
-1. After you set up your required connections, select **Continue**.
+1. After you set up the required connections, select **Continue**.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection.png" alt-text="Screenshot of the designer. A connection for Azure Blob Storage is visible, and the 'Continue' button is highlighted.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection.png" alt-text="Screenshot showing designer with connection to Azure Blob Storage. The 'Continue' button is selected.":::
- The designer opens and displays your logic app workflow.
+ The designer opens and displays your workflow.
> [!TIP]
- > To return to the template viewer, select **Templates** on the designer toolbar. This action discards any unsaved changes, so a warning message appears to confirm your request.
+ >
+ > To return to the template viewer, select **Templates** on the designer toolbar. This action
+ > discards any unsaved changes, so a warning message appears to confirm your request.
+
+1. Continue building your workflow.
+
+1. When you're ready, save your workflow, which automatically publishes your logic app resource live to Azure. On the designer toolbar, select **Save**.
-1. Continue building your logic app workflow.
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-save.png" alt-text="Screenshot showing the designer with top part of a workflow. On the toolbar, 'Save' is selected.":::
-## Update a logic app workflow with a template
+## Update a Consumption workflow with a template
-1. In the [Azure portal](https://portal.azure.com), go to your logic app resource.
+1. In the [Azure portal](https://portal.azure.com), go to your Consumption logic app resource.
-1. On the logic app navigation menu, select **Logic app designer**.
+1. On the resource navigation menu, select **Logic app designer**.
1. On the designer toolbar, select **Templates**. This action discards any unsaved changes, so a warning message appears. To confirm that you want to continue, select **OK**.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-update-existing-with-template.png" alt-text="Screenshot of the designer. The top part of a logic app workflow is visible. On the toolbar, 'Templates' is highlighted.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-update-existing-with-template.png" alt-text="Screenshot showing the designer with top part of a workflow visible. On the toolbar, 'Templates' is selected.":::
1. Scroll past the introduction video and common triggers to **Templates**. Select a prebuilt template.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot of the designer. Under 'Templates,' three templates are visible. One template called 'Delete old Azure blobs' is highlighted.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/choose-logic-app-template.png" alt-text="Screenshot showing the template gallery. Under 'Templates,' three templates are visible. The template named 'Delete old Azure blobs' is selected.":::
When you select a prebuilt template, you can view more information about that template.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screenshot that shows information about the 'Delete old Azure blobs' template. A description and diagram that shows a recurring schedule are visible.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-choose-prebuilt-template.png" alt-text="Screenshot showing information about the 'Delete old Azure blobs' template with a description and diagram that shows a recurring schedule.":::
-1. To continue with the selected template, select **Use this template**.
+1. To continue with the selected template, select **Use this template**.
1. Based on the connectors in the template, you're prompted to perform any of these steps:
- * Sign in with your credentials to systems or services that are referenced by the template.
+ - Sign in with your credentials to systems or services that are referenced by the template.
- * Create connections for any systems or services that are referenced by the template. To create a connection, provide a name for your connection, and if necessary, select the resource that you want to use.
+ - Create connections for any systems or services that are referenced by the template. To create a connection, provide a name for your connection, and if necessary, select the resource that you want to use.
> [!NOTE]
- > Many templates include connectors that have required properties that are prepopulated. Other templates require that you provide values before you can properly deploy the logic app workflow. If you try to deploy without completing the missing property fields, you get an error message.
+ >
+ > Many templates include connectors that have required properties that are prepopulated. Other templates
+ > require that you provide values before you can properly deploy the logic app resource. If you try to
+ > deploy without completing the missing property fields, you get an error message.
1. After you set up your required connections, select **Continue**.
- :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection-designer.png" alt-text="Screenshot of the designer, with a connection for Azure Blob Storage. The 'Continue' button is highlighted.":::
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-create-connection-designer.png" alt-text="Screenshot showing the designer with a connection to Azure Blob Storage. The 'Continue' button is selected.":::
- The designer opens and displays your logic app workflow.
+ The designer opens and displays your workflow.
-1. Continue building your logic app workflow.
+1. Continue building your workflow.
> [!TIP]
- > If you haven't saved your changes, you can discard your work and return to your previous workflow. On the designer toolbar, select **Discard**.
-
-## Deploy a logic app workflow built from a template
-
-After you make your changes to the template, you can save your changes. This action automatically publishes your logic app workflow.
-
-On the designer toolbar, select **Save**.
-
+ >
+ > If you haven't saved your changes, you can discard your work and return to
+ > your previous workflow. On the designer toolbar, select **Discard**.
-## Get support
+1. When you're ready, save your workflow, which automatically publishes your logic app resource live to Azure. On the designer toolbar, select **Save**.
-* For questions, go to the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on feature ideas, go to the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
+ :::image type="content" source="./media/logic-apps-create-logic-apps-from-templates/logic-app-save.png" alt-text="Screenshot showing the designer with top part of a workflow visible. On the toolbar, 'Save' is selected.":::
## Next steps
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md
az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classi
```python
model_name = 'heart-classifier'
model = ml_client.models.create_or_update(
- Model(path='heart-classifier-mlflow/model', type=AssetTypes.MLFLOW_MODEL)
+ Model(name=model_name, path='heart-classifier-mlflow/model', type=AssetTypes.MLFLOW_MODEL)
)
```
+> [!NOTE]
+> The model used in this tutorial is an MLflow model. However, the steps apply for both MLflow models and custom models.
+ ### Creating a scoring script

We need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. We are also going to write directly to the output folder of the job. In summary, the proposed scoring script does the following:
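It loads the model once in `init()`, scores each file handed to `run()`, and writes the scores straight to the job's output folder instead of returning them. A sketch of what such a script can look like (a hedged outline, not the article's exact code; it assumes CSV inputs and uses the `AZUREML_MODEL_DIR` and `AZUREML_BI_OUTPUT_PATH` environment variables that batch deployments set):

```python
import os
import glob
import mlflow
import pandas as pd


def init():
    global model
    # AZUREML_MODEL_DIR points at the folder of the registered model
    model_path = glob.glob(os.path.join(os.environ["AZUREML_MODEL_DIR"], "*"))[0]
    model = mlflow.pyfunc.load_model(model_path)


def run(mini_batch):
    results = []
    output_path = os.environ["AZUREML_BI_OUTPUT_PATH"]
    for file_path in mini_batch:  # each item is the path of one input file
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        # Write the scores for this file directly to the job's output folder
        output_file = os.path.join(output_path, os.path.basename(file_path))
        pd.DataFrame({"prediction": predictions}).to_csv(output_file, index=False)
        results.append(f"{file_path}: processed")
    # Batch deployments still expect one result element per processed file
    return results
```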
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-image-processing-batch.md
Batch Endpoint can only deploy registered models so we need to register it. You
```python
model_name = 'imagenet-classifier'
model = ml_client.models.create_or_update(
- Model(path=model_path, type=AssetTypes.CUSTOM_MODEL)
+ Model(name=model_name, path=model_path, type=AssetTypes.CUSTOM_MODEL)
)
```
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
For testing our endpoint, we are going to use a sample of unlabeled data located
name: heart-dataset-unlabeled
description: An unlabeled dataset for heart classification.
type: uri_folder
- path: heart-dataset
+ path: heart-classifier-mlflow/data
```

Then, create the data asset:
For testing our endpoint, we are going to use a sample of unlabeled data located
# [Azure ML SDK for Python](#tab/sdk)

```python
- data_path = "resources/heart-dataset/"
+ data_path = "heart-classifier-mlflow/data"
dataset_name = "heart-dataset-unlabeled" heart_dataset_unlabeled = Data(
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-nlp-processing-batch.md
az ml model create --name $MODEL_NAME --type "custom_model" --path "bart-text-su
```python
model_name = 'bart-text-summarization'
model = ml_client.models.create_or_update(
- Model(path='bart-text-summarization/model', type=AssetTypes.CUSTOM_MODEL)
+ Model(name=model_name, path='bart-text-summarization/model', type=AssetTypes.CUSTOM_MODEL)
)
```
machine-learning How To Use Low Priority Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-low-priority-batch.md
Azure Machine Learning Batch Deployments provides several capabilities that make
- Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs. Once a deployment is associated with a low-priority VM cluster, all the jobs produced by such a deployment will use low priority VMs. Per-job configuration is not possible.
- Batch deployment jobs automatically seek the target number of VMs in the available compute cluster based on the number of tasks to submit. If VMs are preempted or unavailable, batch deployment jobs attempt to replace the lost capacity by queuing the failed tasks to the cluster.
-- When a job is interrupted, it is resubmitted to run again. Rescheduling is done at job level, regardless of the progress. No checkpointing capability is provided.
+- When a job is interrupted, it is resubmitted to run again. Rescheduling is done at the mini-batch level, regardless of the progress. No checkpointing capability is provided.
- Low priority VMs have a separate vCPU quota that differs from the one for dedicated VMs. Low-priority cores per region have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families. See [Azure Machine Learning compute quotas](../how-to-manage-quotas.md#azure-machine-learning-compute).

## Considerations and use cases
To view these metrics in the Azure portal
## Limitations

- Once a deployment is associated with a low-priority VM cluster, all the jobs produced by such a deployment will use low priority VMs. Per-job configuration is not possible.
-- Rescheduling is done at the job level, regardless of the progress. No checkpointing capability is provided.
+- Rescheduling is done at the mini-batch level, regardless of the progress. No checkpointing capability is provided.
+
+> [!WARNING]
+> If the entire cluster is preempted (or the job runs on a single-node cluster), the job is cancelled because there is no capacity available for it to run. In that case, you must resubmit the job.
+
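Jobs pick up low priority VMs through the cluster they run on, so the tier is set when the compute cluster is created. A minimal sketch with the Python SDK v2 (assuming an authenticated `MLClient` named `ml_client`; the cluster name and VM size are illustrative):

```python
from azure.ai.ml.entities import AmlCompute

# tier="low_priority" makes every batch job that targets this cluster use low priority VMs
low_pri_cluster = AmlCompute(
    name="batch-cluster-lp",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=4,
    tier="low_priority",
)
ml_client.compute.begin_create_or_update(low_pri_cluster).result()
```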
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
-- Previously updated : 09/22/2021+++ Last updated : 10/19/2022 #Customer intent: As a data scientist, I want to know what a compute instance is and how to use it for Azure Machine Learning.
Python packages are all installed in the **Python 3.8 - AzureML** environment. C
## Accessing files
-Notebooks and R scripts are stored in the default storage account of your workspace in Azure file share. These files are located under your "User files" directory. This storage makes it easy to share notebooks between compute instances. The storage account also keeps your notebooks safely preserved when you stop or delete a compute instance.
+Notebooks and Python scripts are stored in the default storage account of your workspace in an Azure file share. These files are located under your "User files" directory. This storage makes it easy to share notebooks between compute instances. The storage account also keeps your notebooks safely preserved when you stop or delete a compute instance.
The Azure file share account of your workspace is mounted as a drive on the compute instance. This drive is the default working directory for Jupyter, Jupyter Labs, and RStudio. This means that the notebooks and other files you create in Jupyter, JupyterLab, or RStudio are automatically stored on the file share and available to use in other compute instances as well.
Writing small files can be slower on network drives than writing to the compute
Do not store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for temporary data. However, do not write very large data files to the OS disk of the compute instance, which has a capacity of 128 GB. You can also store temporary training data on the temporary disk mounted on `/mnt`. The temporary disk size depends on the VM size you choose and can hold larger amounts of data on larger VM sizes. You can also mount [datastores and datasets](v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores). Any software packages you install are saved on the OS disk of the compute instance. Note that customer-managed key encryption is currently not supported for the OS disk; the OS disk for a compute instance is encrypted with Microsoft-managed keys.
-### Create
+## Create
+
+Follow the steps in the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md) to create a basic compute instance.
+
+For more options, see [create a new compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create).
As an administrator, you can **[create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#create-on-behalf-of-preview)**. You can also **[use a setup script (preview)](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance.
-To create a compute instance for yourself, use your workspace in Azure Machine Learning studio, [create a new compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create) from either the **Compute** section or in the **Notebooks** section when you are ready to run one of your notebooks.
-
-You can also create an instance
-* Directly from the [integrated notebooks experience](tutorial-train-deploy-notebook.md#azure)
-* In Azure portal
+Other ways to create a compute instance:
+* Directly from the integrated notebooks experience.
* From an Azure Resource Manager template. For an example template, see the [create an Azure Machine Learning compute instance template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
* With the [Azure Machine Learning SDK](how-to-create-manage-compute-instance.md?tabs=python#create)
* From the [CLI extension for Azure Machine Learning](how-to-create-manage-compute-instance.md?tabs=azure-cli#create)
Compute instance comes with P10 OS disk. Temp disk type depends on the VM size c
## Compute target
-Compute instances can be used as a [training compute target](concept-compute-target.md#train) similar to Azure Machine Learning [compute training clusters](how-to-create-attach-compute-cluster.md). But a compute instance has only a single node, while a compute cluster can have more nodes.
+Compute instances can be used as a [training compute target](concept-compute-target.md#training-compute-targets) similar to Azure Machine Learning [compute training clusters](how-to-create-attach-compute-cluster.md). But a compute instance has only a single node, while a compute cluster can have more nodes.
A compute instance:
You can use compute instance as a local inferencing deployment target for test/d
## Next steps
-* [Create and manage a compute instance](how-to-create-manage-compute-instance.md)
+* [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
* [Tutorial: Train your first ML model](tutorial-1st-experiment-sdk-train.md) shows how to use a compute instance with an integrated notebook.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
-- Previously updated : 10/21/2021+++ Last updated : 10/19/2022 #Customer intent: As a data scientist, I want to understand what a compute target is and why I need it.
A *compute target* is a designated compute resource or environment where you run
In a typical model development lifecycle, you might: 1. Start by developing and experimenting on a small amount of data. At this stage, use your local environment, such as a local computer or cloud-based virtual machine (VM), as your compute target.
-1. Scale up to larger data, or do [distributed training](how-to-train-distributed-gpu.md) by using one of these [training compute targets](#train).
-1. After your model is ready, deploy it to a web hosting environment with one of these [deployment compute targets](#deploy).
+1. Scale up to larger data, or do [distributed training](how-to-train-distributed-gpu.md) by using one of these [training compute targets](#training-compute-targets).
+1. After your model is ready, deploy it to a web hosting environment with one of these [deployment compute targets](#compute-targets-for-inference).
The compute resources you use for your compute targets are attached to a [workspace](concept-workspace.md). Compute resources other than the local machine are shared by users of the workspace.
-## <a name="train"></a> Training compute targets
+## Training compute targets
Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a job. You can also attach your own compute resource, although support for different scenarios might vary. [!INCLUDE [aml-compute-target-train](../../includes/aml-compute-target-train.md)]
-## <a name="deploy"></a> Compute targets for inference
+## Compute targets for inference
When performing inference, Azure Machine Learning creates a Docker container that hosts the model and associated resources needed to use it. This container is then used in a compute target.
When performing inference, Azure Machine Learning creates a Docker container tha
Learn [where and how to deploy your model to a compute target](how-to-deploy-managed-online-endpoints.md).
-<a name="amlcompute"></a>
## Azure Machine Learning compute (managed)

A managed compute resource is created and managed by Azure Machine Learning. This compute is optimized for machine learning workloads. Azure Machine Learning compute clusters and [compute instances](concept-compute-instance.md) are the only managed computes.
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
Previously updated : 10/21/2021 Last updated : 10/20/2022 # Data encryption with Azure Machine Learning
machine-learning Concept Distributed Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-distributed-training.md
description: Learn what type of distributed training Azure Machine Learning supports and the open source framework integrations available for distributed training. --+++ Last updated 03/27/2020
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md
--+++ Last updated 09/23/2021
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
---+++ Last updated 10/21/2021
machine-learning Concept Secure Code Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-code-best-practice.md
--+++ Last updated 10/21/2021
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
description: Learn how to train models with Azure Machine Learning. Explore the different training methods and choose the right one for your project. --+++ Last updated 08/30/2022
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
--+++ Last updated 04/05/2022
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
description: Learn how Azure Machine Learning manages vulnerabilities in images
Previously updated : 12/16/2021 Last updated : 10/20/2022
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
--+++ Last updated 08/26/2022 #Customer intent: As a data scientist, I want to understand the purpose of a workspace for Azure Machine Learning.
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md
Previously updated : 10/21/2021 Last updated : 10/20/2022
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
Title: Develop with AutoML & Azure Databricks
description: Learn to set up a development environment in Azure Machine Learning and Azure Databricks. Use the Azure ML SDKs for Databricks and Databricks with AutoML. --+++ - Last updated 10/21/2021
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
Previously updated : 03/22/2021 Last updated : 10/20/2022
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
--++ Previously updated : 09/21/2022 Last updated : 10/19/2022 # Create an Azure Machine Learning compute cluster
The dedicated cores per region per VM family quota and total regional quota, whi
[!INCLUDE [min-nodes-note](../../includes/machine-learning-min-nodes.md)] The compute autoscales down to zero nodes when it isn't used. Dedicated VMs are created to run your jobs as needed.+
+The fastest way to create a compute cluster is to follow the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
+
+Or use the following examples to create a compute cluster with more options:
# [Python SDK](#tab/python)
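For instance, a sketch with the Python SDK v2 of a cluster that scales between zero and four nodes (assuming an authenticated `MLClient` named `ml_client`; the name, size, and idle timeout are illustrative):

```python
from azure.ai.ml.entities import AmlCompute

cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_v2",
    min_instances=0,                  # scale down to zero nodes when idle
    max_instances=4,
    idle_time_before_scale_down=120,  # seconds before releasing idle nodes
)
ml_client.compute.begin_create_or_update(cluster).result()
```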
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
Title: Manage training & deploy computes (studio)
description: Use studio to manage training and deployment compute resources (compute targets) for machine learning. --++
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Previously updated : 09/21/2022 Last updated : 10/19/2022 # Create and manage an Azure Machine Learning compute instance
Last updated 09/21/2022
> * [v1](v1/how-to-create-manage-compute-instance.md) > * [v2 (current version)](how-to-create-manage-compute-instance.md)
-Learn how to create and manage a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
+Learn how to create and manage a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
-Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#train). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
+Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#training-compute-targets). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
In this article, you learn how to:
Creating a compute instance is a one time process for your workspace. You can re
The dedicated cores per region per VM family quota and total regional quota, which apply to compute instance creation, are unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota, which ensures you'll be able to restart the compute instance. It isn't possible to change the virtual machine size of a compute instance once it's created.
-The following example demonstrates how to create a compute instance:
+The fastest way to create a compute instance is to follow the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md).
+
+Or use the following examples to create a compute instance with more options:
# [Python SDK](#tab/python)
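For instance, a sketch with the Python SDK v2 (assuming an authenticated `MLClient` named `ml_client`; the instance name and size are illustrative):

```python
from azure.ai.ml.entities import ComputeInstance

ci = ComputeInstance(
    name="ci-example-01",   # compute instance names must be unique in the Azure region
    size="STANDARD_DS3_v2",
)
ml_client.compute.begin_create_or_update(ci).result()
```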
You can also create your own custom Azure policy. For example, if the below poli
} ```

## Create on behalf of (preview)

As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md
Title: Set up text labeling project description: Create a project to label text using the data labeling tool. Specify either a single label or multiple labels to be applied to each piece of text.--+++
machine-learning How To Customize Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-customize-compute-instance.md
Last updated 05/04/2022
Use a setup script for an automated way to customize and configure a compute instance at provisioning time.
-Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#train) or for an [inference target](concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
+Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#training-compute-targets) or for an [inference target](concept-compute-target.md#compute-targets-for-inference). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements.
machine-learning How To Integrate Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-integrate-azure-policy.md
Title: Audit and manage Azure Machine Learning description: Learn how to use Azure Policy to use built-in policies for Azure Machine Learning to make sure your workspaces are compliant with your requirements.-- Previously updated : 11/30/2021++ Last updated : 10/20/2022
machine-learning How To Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-label-data.md
Title: Labeling images and text documents title.suffix: Azure Machine Learning description: Use data labeling tools to rapidly label text or label images for a Machine Learning in a data labeling project.--+++
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
--+++ Last updated 09/27/2022-
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md
Alternatively, use the `> Azure ML: Create Dataset` command in the command palet
1. Expand your workspace node. 1. Expand the **Datasets** node. 1. Right-click the dataset you want to:
- - **View Dataset Properties**. Lets you view metadata associated with a specific dataset. If you have multiple version of a dataset, you can choose to only view the dataset properties of a specific version by expanding the dataset node and performing the same steps described in this section on the version of interest.
+ - **View Dataset Properties**. Lets you view metadata associated with a specific dataset. If you have multiple versions of a dataset, you can choose to only view the dataset properties of a specific version by expanding the dataset node and performing the same steps described in this section on the version of interest.
- **Preview dataset**. View your dataset directly in the VS Code Data Viewer. Note that this option is only available for tabular datasets. - **Unregister dataset**. Removes a dataset and all versions of it from your workspace.
Alternatively, use the `Azure ML: Delete Compute instance` command in the comman
## Compute clusters
-For more information, see [training compute targets](concept-compute-target.md#train).
+For more information, see [training compute targets](concept-compute-target.md#training-compute-targets).
### Create compute cluster
Alternatively, use the `> Azure ML: Remove Compute` command in the command palet
## Inference Clusters
-For more information, see [compute targets for inference](concept-compute-target.md#deploy).
+For more information, see [compute targets for inference](concept-compute-target.md#compute-targets-for-inference).
### Manage inference clusters
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
description: Learn how to manage Azure Machine Learning workspaces in the Azure
--+++ Last updated 09/21/2022
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Title: Distributed GPU training guide description: Learn the best practices for performing distributed training with Azure Machine Learning supported frameworks, such as MPI, Horovod, DeepSpeed, PyTorch, PyTorch Lightning, Hugging Face Transformers, TensorFlow, and InfiniBand.--++
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
Title: Train with MLflow Projects
+ Title: Train with MLflow Projects (Preview)
description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from ML models --+++ Last updated 06/16/2021
-# Train ML models with MLflow Projects and Azure Machine Learning
+# Train ML models with MLflow Projects and Azure Machine Learning (Preview)
In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support. You can submit jobs locally with Azure Machine Learning tracking, or migrate your runs to the cloud, for example via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Title: MLflow Tracking for Azure Databricks ML experiments
description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from Azure Databricks ML experiments. --++ -+ Last updated 07/01/2022
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
--++ Last updated 03/31/2022-+ # CLI (v2) core YAML syntax
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
--++ Last updated 08/08/2022-+ # CLI (v2) command job YAML schema
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
Azure Machine Learning introduces two fully managed cloud-based virtual machines
* **Compute clusters**: Compute clusters are clusters of VMs with multi-node scaling capabilities. Compute clusters are better suited for compute targets for large jobs and production. The cluster scales up automatically when a job is submitted. Use as a training compute target or for dev/test deployment.
-For more information about training compute targets, see [Training compute targets](../concept-compute-target.md#train). For more information about deployment compute targets, see [Deployment targets](../concept-compute-target.md#deploy).
+For more information about training compute targets, see [Training compute targets](../concept-compute-target.md#training-compute-targets). For more information about deployment compute targets, see [Deployment targets](../concept-compute-target.md#compute-targets-for-inference).
## Datasets and datastores
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
Next determine where the model will be trained. An automated ML training experim
 * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short training runs (that is, seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time; the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
- * **Choose a remote ML compute cluster**: If you are training with larger datasets like in production training creating models which need longer trains, remote compute will provide much better end-to-end time performance because `AutoML` will parallelize trains across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure will add around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running.[Azure Machine Learning Managed Compute](../concept-compute-target.md#amlcompute) is a managed service that enables the ability to train machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target.
+ * **Choose a remote ML compute cluster**: If you are training with larger datasets, as in production training that creates models needing longer training runs, remote compute provides much better end-to-end time performance because `AutoML` parallelizes training runs across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running. [Azure Machine Learning Managed Compute](../concept-compute-target.md#azure-machine-learning-compute-managed) is a managed service that enables training machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target. The compute choice is passed to automated ML through the `compute_target` setting, as in the sketch after this list.
* An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](../how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
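For the local and remote cluster options, a minimal SDK v1 sketch of wiring the choice into `AutoMLConfig` (assuming `training_dataset` is a registered `TabularDataset` and `compute_target` is an existing cluster; the label column and experiment names are illustrative):

```python
from azureml.core import Experiment, Workspace
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()

automl_config = AutoMLConfig(
    task="classification",
    training_data=training_dataset,   # a registered TabularDataset
    label_column_name="Class",        # illustrative label column
    compute_target=compute_target,    # omit to train on your local compute
    n_cross_validations=5,
)

# Submitting the experiment runs the child runs on the chosen compute
run = Experiment(ws, "automl-remote-example").submit(automl_config)
```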
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-machine-learning-pipelines.md
output_data_dataset = output_data1.register_on_complete(name = 'prepared_output_
## Set up a compute target
-In Azure Machine Learning, the term __compute__ (or __compute target__) refers to the machines or clusters that do the computational steps in your machine learning pipeline. See [compute targets for model training](../concept-compute-target.md#train) for a full list of compute targets and [Create compute targets](../how-to-create-attach-compute-studio.md) for how to create and attach them to your workspace. The process for creating and or attaching a compute target is the same whether you're training a model or running a pipeline step. After you create and attach your compute target, use the `ComputeTarget` object in your [pipeline step](#steps).
+In Azure Machine Learning, the term __compute__ (or __compute target__) refers to the machines or clusters that do the computational steps in your machine learning pipeline. See [compute targets for model training](../concept-compute-target.md#training-compute-targets) for a full list of compute targets and [Create compute targets](../how-to-create-attach-compute-studio.md) for how to create and attach them to your workspace. The process for creating and or attaching a compute target is the same whether you're training a model or running a pipeline step. After you create and attach your compute target, use the `ComputeTarget` object in your [pipeline step](#steps).
> [!IMPORTANT]
> Performing management operations on compute targets isn't supported from inside remote jobs. Since machine learning pipelines are submitted as a remote job, do not use management operations on compute targets from inside the pipeline.
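For example, a common SDK v1 pattern is to reuse an existing cluster or provision one if it doesn't exist; a sketch assuming `ws` is your `Workspace` (the cluster name and VM size are illustrative):

```python
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

cluster_name = "cpu-cluster"  # illustrative name

try:
    # Reuse the compute target if it already exists in the workspace
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
except ComputeTargetException:
    # Otherwise provision a new autoscaling cluster
    config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_DS3_V2", min_nodes=0, max_nodes=4
    )
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    compute_target.wait_for_completion(show_output=True)
```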
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
Last updated 05/02/2022
Learn how to create and manage a [compute instance](../concept-compute-instance.md) in your Azure Machine Learning workspace with CLI v1.
-Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](../concept-compute-target.md#train) or for an [inference target](../concept-compute-target.md#deploy). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
+Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](../concept-compute-target.md#training-compute-targets) or for an [inference target](../concept-compute-target.md#compute-targets-for-inference). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
Compute instances can run jobs securely in a [virtual network environment](../how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-distributed-gpu.md
Title: Distributed GPU training guide (SDK v1) description: Learn the best practices for performing distributed training with Azure Machine Learning SDK (v1) supported frameworks, such as MPI, Horovod, DeepSpeed, PyTorch, PyTorch Lightning, Hugging Face Transformers, TensorFlow, and InfiniBand.
Make sure your code follows these tips:
* Your Azure ML environment contains DeepSpeed and its dependencies, Open MPI, and mpi4py.
* Create an `MpiConfiguration` with your distribution (see the sketch after the example link below).
-### DeepSeed example
+### DeepSpeed example
* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/deepspeed/cifar)
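As a hedged sketch of the `MpiConfiguration` step above (the script name, source folder, cluster name, and environment name are hypothetical; the environment is assumed to contain DeepSpeed, Open MPI, and mpi4py):

```python
from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace
from azureml.core.runconfig import MpiConfiguration

ws = Workspace.from_config()
env = Environment.get(ws, name="my-deepspeed-env")  # hypothetical environment

# One MPI process per GPU on each of two 4-GPU nodes.
distr_config = MpiConfiguration(process_count_per_node=4, node_count=2)

run_config = ScriptRunConfig(
    source_directory="src",        # hypothetical folder containing train.py
    script="train.py",
    compute_target="gpu-cluster",  # hypothetical GPU cluster
    environment=env,
    distributed_job_config=distr_config,
)
run = Experiment(ws, "deepspeed-cifar").submit(run_config)
```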
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-environments.md
Title: Use software environments CLI v1 description: Create and manage environments for model training and deployment with CLI v1. Manage Python packages and other settings for the environment. Last updated 04/19/2022
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes agentless migration requirements for VMware VMs.
| **Supported operating systems** | You can migrate [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems that are supported by Azure.
**Windows VMs in Azure** | You might need to [make some changes](prepare-for-migration.md#verify-required-changes-before-migrating) on VMs before migration.
-**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 8, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x <br/> - Cent OS 8.x, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3<br/>- Ubuntu 20.04, 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 10, 9, 8, 7<br/> - Oracle Linux 8, 7.7-CI, 7.7, 6<br/> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
+**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 8, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2, 7.1, 7.0, 6.x <br/> - CentOS 8.x, 7.7, 7.6, 7.5, 7.4, 6.x<br/> - SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3<br/>- Ubuntu 20.04, 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 10, 9, 8, 7<br/> - Oracle Linux 8, 7.7-CI, 7.7, 6<br/> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
**Boot requirements** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks.
**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
**Disk size** | Up to 2 TB OS disk for Gen 1 and Gen 2 VMs; 32 TB for data disks.
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
Configure this setting manually as follows:
Azure Migrate completes these actions automatically for these versions
-- Red Hat Enterprise Linux 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.2, 7.0, 6.x (Azure Linux VM agent is also installed automatically during migration)
+- Red Hat Enterprise Linux 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.3, 7.2, 7.1, 7.0, 6.x (Azure Linux VM agent is also installed automatically during migration)
- CentOS 8.x, 7.7, 7.6, 7.5, 7.4, 6.x (Azure Linux VM agent is also installed automatically during migration)
- SUSE Linux Enterprise Server 15 SP0, 15 SP1, 12, 11 SP4, 11 SP3
- Ubuntu 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS (Azure Linux VM agent is also installed automatically during migration)
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
Previously updated : 07/28/2021 Last updated : 10/20/2022
# Troubleshoot assessment
This table lists help for fixing the following assessment readiness issues.
**Issue** | **Fix** |
-Unsupported boot type | Azure does not support UEFI boot type for VMs with the Operating Systems: Windows Server 2003/Windows Server 2003 R2/Windows Server 2008/Windows Server 2008 R2. Check list of Operating Systems that support UEFI-based machines [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure)
+Unsupported boot type | Azure does not support UEFI boot type for VMs with the Windows Server 2003/Windows Server 2003 R2/Windows Server 2008/Windows Server 2008 R2 operating systems. Check the list of operating systems that support UEFI-based machines [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure).
Conditionally supported Windows operating system | The operating system has passed its end-of-support date and needs a Custom Support Agreement for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure.
Unsupported Windows operating system | Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure.
-Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. For more information, see [this website](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment).
+Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. [Learn more](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment).
Unendorsed Linux OS | The server might start in Azure, but Azure provides no operating system support. Consider upgrading to an [endorsed Linux version](../virtual-machines/linux/endorsed-distros.md) before you migrate to Azure.
-Unknown operating system | The operating system of the VM was specified as "Other" in vCenter Server. This behavior blocks Azure Migrate from verifying the Azure readiness of the VM. Make sure that the operating system is [supported](./migrate-support-matrix-vmware-migration.md#azure-vm-requirements) by Azure before you migrate the server.
-Unsupported bit version | VMs with a 32-bit operating systems might boot in Azure, but we recommend that you upgrade to 64-bit before you migrate to Azure.
+Unknown operating system | The operating system of the VM was specified as **Other** in vCenter Server. This behavior blocks Azure Migrate from verifying the Azure readiness of the VM. Ensure that the operating system is [supported](./migrate-support-matrix-vmware-migration.md#azure-vm-requirements) by Azure before you migrate the server.
+Unsupported bit version | VMs with a 32-bit operating system might boot in Azure, but we recommend that you upgrade to 64-bit before you migrate to Azure.
Requires a Microsoft Visual Studio subscription | The server is running a Windows client operating system, which is supported only through a Visual Studio subscription.
-VM not found for the required storage performance | The storage performance (input/output operations per second [IOPS] and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration.
+VM not found for the required storage performance | The storage performance (input/output operations per second (IOPS) and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration.
VM not found for the required network performance | The network performance (in/out) required for the server exceeds Azure VM support. Reduce the networking requirements for the server.
VM not found in the specified location | Use a different target location before migration.
-One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.<br><br> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br><br> For each disk attached to the VM, make sure that the size of the disk is <64 TB (supported by Ultra SSD disks).<br><br> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by Azure [managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
+One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.<br><br> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br><br> For each disk attached to the VM, make sure that the size of the disk is < 64 TB (supported by Ultra SSD disks).<br><br> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by [Azure managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits).
One or more unsuitable network adapters | Remove unused network adapters from the server before migration.
Disk count exceeds limit | Remove unused disks from the server before migration.
-Disk size exceeds limit | Azure Migrate: Discovery and assessment supports disks with up to 64-TB size (Ultra disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
+Disk size exceeds limit | Azure Migrate: Discovery and assessment supports disks up to 64 TB in size (Ultra disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
Disk unavailable in the specified location | Make sure the disk is in your target location before you migrate.
Disk unavailable for the specified redundancy | The disk should use the redundancy storage type defined in the assessment settings (LRS by default).
Couldn't determine disk suitability because of an internal error | Try creating a new assessment for the group.
VM with required cores and memory not found | Azure couldn't find a suitable VM
Couldn't determine VM suitability because of an internal error | Try creating a new assessment for the group.
Couldn't determine suitability for one or more disks because of an internal error | Try creating a new assessment for the group.
Couldn't determine suitability for one or more network adapters because of an internal error | Try creating a new assessment for the group.
-No VM size found for offer currency Reserved Instance (RI) | Server marked "not suitable" because the VM size wasn't found for the selected combination of RI, offer, and currency. Edit the assessment properties to choose the valid combinations and recalculate the assessment.
+No VM size found for offer currency Reserved Instance (RI) | The server is marked **not suitable** because the VM size wasn't found for the selected combination of RI, offer, and currency. Edit the assessment properties to choose the valid combinations and recalculate the assessment.
## Azure VMware Solution (AVS) assessment readiness issues
This table lists help for fixing the following assessment readiness issues.
**Issue** | **Fix** |
Unsupported IPv6 | Only applicable to Azure VMware Solution assessments. Azure VMware Solution doesn't support IPv6 internet addresses. Contact the Azure VMware Solution team for remediation guidance if your server is detected with IPv6.
-Unsupported OS | Support for certain Operating System versions has been deprecated by VMware and the assessment recommends you to upgrade the operating system before migrating to Azure VMware Solution. [Learn more](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software)
+Unsupported OS | Support for certain operating system versions has been deprecated by VMware, and the assessment recommends that you upgrade the operating system before migrating to Azure VMware Solution. [Learn more](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software).
## Suggested migration tool in an import-based Azure VMware Solution assessment is unknown
For servers imported via a CSV file, the default migration tool in an Azure VMwa
## Linux VMs are "conditionally ready" in an Azure VM assessment
-In the case of VMware and Hyper-V VMs, an Azure VM assessment marks Linux VMs as "conditionally ready" because of a known gap.
+In the case of VMware and Hyper-V VMs, an Azure VM assessment marks Linux VMs as **conditionally ready** because of a known gap.
- The gap prevents it from detecting the minor version of the Linux OS installed on the on-premises VMs.
- For example, for RHEL 6.10, currently an Azure VM assessment detects only RHEL 6 as the OS version. This behavior occurs because the vCenter Server and the Hyper-V host don't provide the kernel version for Linux VM operating systems.
-- Because Azure endorses only specific versions of Linux, the Linux VMs are currently marked as "conditionally ready" in an Azure VM assessment.
+- Since Azure endorses only specific versions of Linux, the Linux VMs are currently marked as **conditionally ready** in an Azure VM assessment.
- You can determine whether the Linux OS running on the on-premises VM is endorsed in Azure by reviewing [Azure Linux support](../virtual-machines/linux/endorsed-distros.md).
- After you've verified the endorsed distribution, you can ignore this warning.
This gap can be addressed by enabling [application discovery](./how-to-discover-
## Operating system version not available
-For physical servers, the operating system minor version information should be available. If it isn't available, contact Microsoft Support. For servers in a VMware environment, Azure Migrate uses the operating system information specified for the VM in vCenter Server. But vCenter Server doesn't provide the minor version for operating systems. To discover the minor version, set up [application discovery](./how-to-discover-applications.md). For Hyper-V VMs, operating system minor version discovery isn't supported.
+For physical servers, the operating system minor version information should be available. If it isn't available, contact Microsoft Support. For servers in a VMware environment, Azure Migrate uses the operating system information specified for the VM in the vCenter Server. But vCenter Server doesn't provide the minor version for operating systems. To discover the minor version, set up [application discovery](./how-to-discover-applications.md). For Hyper-V VMs, operating system minor version discovery isn't supported.
## Azure SKUs bigger than on-premises in an Azure VM assessment
Let's look at an example recommendation:
We have an on-premises VM with 4 cores and 8 GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3.
- If the assessment is **As on-premises**, an Azure VM SKU with 4 cores and 8 GB of memory is recommended.
-- If the assessment is **Performance-based**, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8-GB memory * 1.3 = 5.3-GB memory), the cheapest VM SKU of 4 cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended.
+- If the assessment is **Performance-based**, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8 GB memory * 1.3 = 5.2 GB memory), the cheapest VM SKU of 4 cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended.
- [Learn more](concepts-assessment-calculation.md#types-of-assessments) about assessment sizing.
## Why is the recommended Azure disk SKU bigger than on-premises in an Azure VM assessment?
For **Performance-based** assessment, the assessment report export says 'Percent
- If the VMs are powered on for the duration for which you're creating the assessment.
- If only memory counters are missing and you're trying to assess Hyper-V VMs, check if you have dynamic memory enabled on these VMs. Because of a known issue, currently the Azure Migrate appliance can't collect memory utilization for such VMs.
-- If all of the performance counters are missing, ensure the port access requirements for assessment are met. Learn more about the port access requirements for [VMware](./migrate-support-matrix-vmware.md#port-access-requirements), [Hyper-V](./migrate-support-matrix-hyper-v.md#port-access), and [physical](./migrate-support-matrix-physical.md#port-access) assessment.
+- If all of the performance counters are missing, ensure the port access requirements for assessment are met. Learn more about the port access requirements for [VMware](./migrate-support-matrix-vmware.md#port-access-requirements), [Hyper-V](./migrate-support-matrix-hyper-v.md#port-access), and [physical](./migrate-support-matrix-physical.md#port-access) assessments.
If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
## Why is performance data missing for some or all servers in my Azure VM or Azure VMware Solution assessment report?
-For **Performance-based** assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance can't collect performance data for the on-premises servers. Make sure to check:
+For **Performance-based** assessment, the assessment report export says **PercentageOfCoresUtilizedMissing** or **PercentageOfMemoryUtilizedMissing** when the Azure Migrate appliance can't collect performance data for the on-premises servers. Make sure to check:
- If the servers are powered on for the duration for which you're creating the assessment.
- If only memory counters are missing and you're trying to assess servers in a Hyper-V environment. In this scenario, enable dynamic memory on the servers and recalculate the assessment to reflect the latest changes. The appliance can collect memory utilization values for servers in a Hyper-V environment only when the server has dynamic memory enabled.
To ensure performance data is collected, make sure to check:
- If the SQL servers are powered on for the duration for which you're creating the assessment.
- If the connection status of the SQL agent in Azure Migrate is **Connected**, and also check the last heartbeat.
- If the Azure Migrate connection status for all SQL instances is **Connected** in the discovered SQL instance pane.
-- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
+- If all of the performance counters are missing, ensure that outbound connections on port 443 (HTTPS) are allowed.
If any of the performance counters are missing, the Azure SQL assessment recommends the smallest Azure SQL configuration for that instance or database.
If any of the performance counters are missing, the Azure SQL assessment recomme
The confidence rating is calculated for **Performance-based** assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. An assessment could get a low confidence rating for the following reasons:
- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you're creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you can't wait for the duration, change the performance duration to a shorter period and recalculate the assessment.
-- Assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
+- The assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
  - Servers are powered on for the duration of the assessment.
  - Outbound connections on port 443 are allowed.
  - For Hyper-V servers, dynamic memory is enabled.
- - The connection status of agents in Azure Migrate is "Connected." Also check the last heartbeat.
- - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance pane.
+ - The connection status of agents in Azure Migrate is **Connected**. Also check the last heartbeat.
+ - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is **Connected** in the discovered SQL instance pane.
Recalculate the assessment to reflect the latest changes in confidence rating.
The confidence rating is calculated for **Performance-based** assessments based
## Why is my RAM utilization greater than 100%?
-By design, in Hyper-V if maximum memory provisioned is less than what is required by the VM, Assessment will show memory utilization to be more than 100%.
+By design in Hyper-V, if the maximum memory provisioned is less than what the VM requires, the assessment will show memory utilization to be more than 100%.
## Is the operating system license included in an Azure VM assessment?
An Azure VM assessment currently considers the operating system license cost onl
## How does performance-based sizing work in an Azure VM assessment?
-An Azure VM assessment continuously collects performance data of on-premises servers and uses it to recommend the VM SKU and disk SKU in Azure. [Learn how](concepts-assessment-calculation.md#calculate-sizing-performance-based) performance-based data is collected.
+An Azure VM assessment continuously collects performance data of on-premises servers and uses it to recommend the VM SKU and disk SKU in Azure. [Learn more](concepts-assessment-calculation.md#calculate-sizing-performance-based) about how performance-based data is collected.
## Can I migrate my disks to an Ultra disk by using Azure Migrate?
-No. Currently, both Azure Migrate and Azure Site Recovery don't support migration to Ultra disks. Find steps to deploy an Ultra disk at [this website](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk).
+No. Currently, neither Azure Migrate nor Azure Site Recovery supports migration to Ultra disks. [Learn more](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk) about deploying an Ultra disk.
## Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
As per the [official pricing page](https://azure.microsoft.com/pricing/details/managed-disks/), Ultra disk is billed based on the provisioned size, provisioned IOPS, and provisioned throughput. For example, if you provisioned a 200-GiB Ultra disk with 20,000 IOPS and 1,000 MB/second and deleted it after 20 hours, it will map to the disk size offer of 256 GiB. You'll be billed for 256 GiB, 20,000 IOPS, and 1,000 MB/second for 20 hours.
-IOPS to be provisioned = (Throughput discovered) *1024/256
+IOPS to be provisioned = (Throughput discovered) * 1024/256
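To make the arithmetic concrete: assuming the 256-KB I/O size implied by this formula, a discovered throughput of 128 MB/second maps to 128 * 1024 / 256 = 512 provisioned IOPS.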
## Does the Ultra disk recommendation consider latency?
No, currently only disk size, total throughput, and total IOPS are used for sizi
## I can see M series supports Ultra disk, but in my assessment where Ultra disk was recommended, it says "No VM found for this location"
-This result is possible because not all VM sizes that support Ultra disk are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
+This result is possible because not all VM sizes that support Ultra disks are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
## Why is my assessment showing a warning that it was created with an invalid offer?
Your assessment was created with an offer that is no longer valid and hence, the
## Why is my assessment showing a warning that it was created with a target Azure location that has been deprecated?
-Your assessment was created with an Azure region that has been deprecated and hence the **Edit** and **Recalculate** buttons are disabled. You can [create a new assessment](how-to-create-assessment.md) with any of the valid target locations. [Learn more.](concepts-assessment-calculation.md#whats-in-an-azure-vm-assessment)
+Your assessment was created with an Azure region that has been deprecated and hence the **Edit** and **Recalculate** buttons are disabled. You can [create a new assessment](how-to-create-assessment.md) with any of the valid target locations. [Learn more](concepts-assessment-calculation.md#whats-in-an-azure-vm-assessment).
## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime, and Discount (%)?
This issue can happen if the physical server has Hyper-V virtualization enabled.
## The recommended Azure VM SKU for my physical server is oversized
-This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual network adapters. As a result, the number of network adapters discovered is higher than the actual number. The Azure VM assessment picks an Azure VM that can support the required number of network adapters, which can potentially result in an oversized VM. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of number of network adapters on sizing. This product gap will be addressed going forward.
+This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual network adapters. As a result, the number of network adapters discovered is higher than the actual number. The Azure VM assessment picks an Azure VM that can support the required number of network adapters, which can potentially result in an oversized VM. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of the number of network adapters on sizing. This product gap will be addressed going forward.
## The readiness category is marked "Not ready" for my physical server
-The readiness category might be incorrectly marked as "Not ready" in the case of a physical server that has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual adapters. As a result, the number of network adapters discovered is higher than the actual number. In both **As on-premises** and **Performance-based** assessments, the Azure VM assessment picks an Azure VM that can support the required number of network adapters. If the number of network adapters is discovered to be higher than 32, the maximum number of NICs supported on Azure VMs, the server will be marked "Not ready." [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of number of NICs on sizing.
+The readiness category might be incorrectly marked as **Not ready** in the case of a physical server that has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual adapters. As a result, the number of network adapters discovered is higher than the actual number. In both **As on-premises** and **Performance-based** assessments, the Azure VM assessment picks an Azure VM that can support the required number of network adapters. If the number of network adapters is discovered to be higher than 32, the maximum number of NICs supported on Azure VMs, the server will be marked **Not ready**. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of the number of NICs on sizing.
## The number of discovered NICs is higher than actual for physical servers
To collect network traffic logs:
## Where is the Operating System data in my assessment discovered from?
-- For VMware VMs, by default, it's the Operating System data provided by the vCenter Server.
- - For VMware Linux VMs, if application discovery is enabled, the OS details are fetched from the guest VM. To check which OS details are in the assessment, go to the **Discovered servers** view, and mouse over the value in the **Operating system** column. In the text that pops up, you'd be able to see whether the OS data you see is gathered from the vCenter Server or from the guest VM by using the VM credentials.
+- For VMware VMs, by default, it's the operating system data provided by the vCenter Server.
+ - For VMware Linux VMs, if application discovery is enabled, the OS details are fetched from the guest VM. To check which OS details are in the assessment, go to the **Discovered servers** view, and hover over the value in the **Operating system** column. In the text that pops up, you'd be able to see whether the OS data you see is gathered from the vCenter Server or from the guest VM by using the VM credentials.
- For Windows VMs, the operating system details are always fetched from the vCenter Server.
-- For Hyper-V VMs, the Operating System data is gathered from the Hyper-V host.
+- For Hyper-V VMs, the operating system data is gathered from the Hyper-V host.
- For physical servers, it is fetched from the server.
## Common web apps discovery errors
mysql How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md
This feature will enable customers to perform in-place upgrades of their MySQL 5
>[!Important] > Before you upgrade, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
- > Verify deprecated [sql_mode](/https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using Server Parameters Blade on your Azure Portal to avoid deployment failure.
- > [sql_mode](/https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) with values NO_AUTO_CREATE_USER, NO_FIELD_OPTIONS, NO_KEY_OPTIONS and NO_TABLE_OPTIONS are no longer supported in MySQL 8.0.
+ > Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 by using the Server parameters blade in the Azure portal to avoid deployment failure.
+ > The [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values NO_AUTO_CREATE_USER, NO_FIELD_OPTIONS, NO_KEY_OPTIONS, and NO_TABLE_OPTIONS are no longer supported in MySQL 8.0.
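As a quick pre-upgrade check, the following hedged sketch (hypothetical server name and credentials; assumes the `mysql-connector-python` package) reads the global `sql_mode` and flags the deprecated values named above:

```python
# List the global sql_mode values on a MySQL 5.7 Flexible Server and flag deprecated ones.
import mysql.connector  # assumes mysql-connector-python is installed

conn = mysql.connector.connect(
    host="myserver57.mysql.database.azure.com",  # hypothetical server
    user="myadmin",                              # hypothetical admin user
    password="<password>",
)
cur = conn.cursor()
cur.execute("SELECT @@GLOBAL.sql_mode;")
modes = set(cur.fetchone()[0].split(","))
deprecated = {"NO_AUTO_CREATE_USER", "NO_FIELD_OPTIONS", "NO_KEY_OPTIONS", "NO_TABLE_OPTIONS"}
print("Remove before upgrading:", sorted(deprecated & modes))
conn.close()
```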
:::image type="content" source="./media/how-to-upgrade/1-how-to-upgrade.png" alt-text="Screenshot showing Azure Database for MySQL Upgrade.":::
Follow these steps to perform major version upgrade for your Azure Database of M
2. From the Overview page, select the Upgrade button in the toolbar.
>[!Important] > Before you upgrade, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
->Verify deprecated [sql_mode](/https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using Server Parameters Blade on your Azure Portal to avoid deployment failure.
+>Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 by using the Server parameters blade in the Azure portal to avoid deployment failure.
3. In the Upgrade section, select the Upgrade button to upgrade your Azure Database for MySQL 5.7 read replica server to an 8.0 server.
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
Title: Tutorial - Monitor network communication using the Azure portal using Virtual machine scale set
-description: In this tutorial, learn how to monitor network communication between two virtual machine scale sets with Azure Network Watcher's connection monitor capability.
+ Title: Tutorial - Monitor network communication between two virtual machine scale sets by using the Azure portal
+description: In this tutorial, you'll learn how to monitor network communication between two virtual machine scale sets by using the Azure Network Watcher connection monitor capability.
documentationcenter: na editor: '' tags: azure-resource-manager
-# Customer intent: I need to monitor communication between a virtual machine scale set and another VM. If the communication fails, I need to know why, so that I can resolve the problem.
+# Customer intent: I need to monitor communication between a virtual machine scale set and another VM. If the communication fails, I need to know why, so that I can resolve the problem.
Last updated 05/24/2022
-# Tutorial: Monitor network communication between two virtual machine scale sets using the Azure portal
+# Tutorial: Monitor network communication between two virtual machine scale sets by using the Azure portal
+
+Successful communication between a virtual machine scale set and an endpoint, such as another virtual machine (VM), can be critical for your organization. Sometimes, the introduction of configuration changes can break communication. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual machine scale set and a VM.
+> * Monitor communication between a scale set and a VM by using Connection Monitor.
+> * Generate alerts on Connection Monitor metrics.
+> * Diagnose a communication problem between two VMs, and learn how to resolve it.
> [!NOTE]
-> This tutorial covers Connection Monitor. Try the new and improved [Connection Monitor](connection-monitor-overview.md) to experience enhanced connectivity monitoring
+> This tutorial uses Connection Monitor. To experience enhanced connectivity monitoring, try the updated version of [Connection Monitor](connection-monitor-overview.md).
> [!IMPORTANT]
-> Starting 1 July 2021, you will not be able to add new connection monitors in Connection Monitor (classic) but you can continue to use existing connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the new Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before 29 February 2024.
+> As of July 1, 2021, you can't add new connection monitors in Connection Monitor (classic) but you can continue to use earlier versions that were created prior to that date. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the latest Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before February 29, 2024.
-Successful communication between a virtual machine scale set (VMSS) and an endpoint such as another VM, can be critical for your organization. Sometimes, configuration changes are introduced which can break communication. In this tutorial, you learn how to:
-> [!div class="checklist"]
-> * Create a virtual machine scale set and a VM
-> * Monitor communication between virtual machine scale set and VM with Connection Monitor
-> * Generate alerts on Connection Monitor metrics
-> * Diagnose a communication problem between two VMs, and learn how you can resolve it
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+Before you begin, if you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com).
## Create a virtual machine scale set
-Create a virtual machine scale set.
+In the following sections, you create a virtual machine scale set.
-## Create a load balancer
+### Create a load balancer
-Azure [load balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
+[Azure Load Balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
-First, create a public Standard Load Balancer by using the portal. The name and public IP address you create are automatically configured as the load balancer's front end.
+First, create a public standard load balancer by using the Azure portal. The name and public IP address you create are automatically configured as the load balancer's front end.
-1. In the search box, type **load balancer**. Under **Marketplace** in the search results, pick **Load balancer**.
-1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+1. In the search box, type **load balancer** and then, under **Marketplace** in the search results, select **Load balancer**.
+1. On the **Basics** pane of the **Create load balancer** page, do the following:
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and type *myVMSSResourceGroup* in the text box.|
- | Name | *myLoadBalancer* |
- | Region | Select **East US**. |
- | Type | Select **Public**. |
- | SKU | Select **Standard**. |
- | Public IP address | Select **Create new**. |
- | Public IP address name | *myPip* |
- | Assignment| Static |
- | Availability zone | Select **Zone-redundant**. |
+ | Setting | Value |
+ | | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new** and then, in the box, type **myVMSSResourceGroup**.|
+ | Name | Enter **myLoadBalancer**. |
+ | Region | Select **East US**. |
+ | Type | Select **Public**. |
+ | SKU | Select **Standard**. |
+ | Public IP address | Select **Create new**. |
+ | Public IP address name | Enter **myPip**. |
+ | Assignment| Select **Static**. |
+ | Availability zone | Select **Zone-redundant**. |
-1. When you are done, select **Review + create**
+1. Select **Review + create**.
1. After it passes validation, select **Create**.
-## Create virtual machine scale set
+### Create a virtual machine scale set
You can deploy a scale set with a Windows Server image or Linux images such as RHEL, CentOS, Ubuntu, or SLES.
-1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**. Select **Create** on the **Virtual machine scale sets** page, which will open the **Create a virtual machine scale set** page.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from the resource group list.
-1. Type *myScaleSet* as the name for your scale set.
-1. In **Region**, select a region that is close to your area.
-1. Under **Orchestration**, ensure the *Uniform* option is selected for **Orchestration mode**.
-1. Select a marketplace image for **Image**. In this example, we have chosen *Ubuntu Server 18.04 LTS*.
-1. Enter your desired username, and select which authentication type you prefer.
- - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lowercase character, one uppercase character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
- - If you select a Linux OS disk image, you can instead choose **SSH public key**. Only provide your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
-
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**.
+1. On the **Virtual machine scale sets** pane, select **Create**.
+
+ The **Create a virtual machine scale set** page opens.
+1. On the **Basics** pane, under **Project details**, ensure that the correct subscription is selected, and then select **myVMSSResourceGroup** in the resource group list.
+1. For **Name**, type **myScaleSet**.
+1. For **Region**, select a region that's close to your area.
+1. Under **Orchestration**, for **Orchestration mode**, ensure that the **Uniform** option is selected.
+1. For **Image**, select a marketplace image. In this example, we've chosen *Ubuntu Server 18.04 LTS*.
+1. Enter your username, and then select the authentication type you prefer.
+ - A **Password** must be at least 12 characters long and contain three of the following: a lowercase character, an uppercase character, a number, and a special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
+ - If you select a Linux OS disk image, you can instead choose **SSH public key**. Provide only your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
-1. Select **Next** to move the other pages.
+1. Select **Next**.
1. Leave the defaults for the **Instance** and **Disks** pages.
1. On the **Networking** page, under **Load balancing**, select **Yes** to put the scale set instances behind a load balancer.
-1. In **Load balancing options**, select **Azure load balancer**.
-1. In **Select a load balancer**, select *myLoadBalancer* that you created earlier.
-1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
-1. When you are done, select **Review + create**.
+1. For **Load balancing options**, select **Azure load balancer**.
+1. For **Select a load balancer**, select **myLoadBalancer**, which you created earlier.
+1. For **Select a backend pool**, select **Create new**, type **myBackendPool**, and then select **Create**.
+1. When you're done, select **Review + create**.
1. After it passes validation, select **Create** to deploy the scale set.
-Once the scale set is created, follow the steps below to enable the Network Watcher extension in the scale set.
+After the scale set has been created, enable the Network Watcher extension in the scale set by doing the following:
-1. Under **Settings**, select **Extensions**. Select **Add extension**, and select **Network Watcher Agent for Windows**, as shown in the following picture:
+1. Under **Settings**, select **Extensions**.
+1. Select **Add extension**, and then select **Network Watcher Agent for Windows**, as shown in the following image:
-
-1. Under **Network Watcher Agent for Windows**, select **Create**, under **Install extension** select **OK**, and then under **Extensions**, select **OK**.
+ :::image type="content" source="./media/connection-monitor/nw-agent-extension.png" alt-text="Screenshot that shows the Network Watcher extension addition.":::
-
-### Create the VM
+1. Under **Network Watcher Agent for Windows**, select **Create**.
+1. Under **Install extension**, select **OK**.
+1. Under **Extensions**, select **OK**.
-Complete the steps in [create a VM](./connection-monitor.md#create-the-first-vm) again, with the following changes:
+## Create the VM
+
+Complete the steps in the "Create the first VM" section of [Tutorial: Monitor network communication between two virtual machines by using the Azure portal](./connection-monitor.md#create-the-first-vm), but **with the following changes**:
|Step|Setting|Value|
-||||
-| 1 | Select a version of the **Ubuntu Server** | |
-| 3 | Name | myVm2 |
-| 3 | Authentication type | Paste your SSH public key or select **Password**, and enter a password. |
-| 3 | Resource group | Select **Use existing** and select **myResourceGroup**. |
-| 6 | Extensions | **Network Watcher Agent for Linux** |
+|:|||
+| 1 | Select a version of **Ubuntu Server**. | |
+| 3 | Name | Enter **myVm2**. |
+| 3 | Authentication type | Paste your SSH public key or select **Password**, and then enter a password. |
+| 3 | Resource group | Select **Use existing**, and then select **myResourceGroup**. |
+| 6 | Extensions | Select **Network Watcher Agent for Linux**. |
-The VM takes a few minutes to deploy. Wait for the VM to finish deploying before continuing with the remaining steps.
+The VM takes a few minutes to deploy. Wait for it to finish deploying before you continue with the remaining steps.
## Create a connection monitor
To create a monitor in Connection Monitor by using the Azure portal:
1. On the Azure portal home page, go to **Network Watcher**.
-1. In the left pane, in the **Monitoring** section, select **Connection monitor**.
+1. On the left pane, in the **Monitoring** section, select **Connection monitor**.
- You'll see all the connection monitors that were created in Connection Monitor. To see the connection monitors that were created in the classic Connection Monitor, go to the **Connection monitor** tab.
+ You'll see a list of the connection monitors that were created in Connection Monitor. To see the connection monitors that were created in the classic Connection Monitor, select the **Connection monitor** tab.
- :::image type="content" source="./media/connection-monitor-2-preview/cm-resource-view.png" alt-text="Screenshot that shows connection monitors created in Connection Monitor.":::
-
-
-1. In the **Connection Monitor** dashboard, in the upper-left corner, select **Create**.
-
+ :::image type="content" source="./media/connection-monitor-2-preview/cm-resource-view.png" alt-text="Screenshot that lists the connection monitors that were created in Connection Monitor.":::
+
+1. On the **Connection Monitor** dashboard, at the upper left, select **Create**.
+
+1. On the **Basics** pane, enter information for your connection monitor:
-1. On the **Basics** tab, enter information for your connection monitor:
- * **Connection Monitor Name**: Enter a name for your connection monitor. Use the standard naming rules for Azure resources.
- * **Subscription**: Select a subscription for your connection monitor.
- * **Region**: Select a region for your connection monitor. You can select only the source VMs that are created in this region.
- * **Workspace configuration**: Choose a custom workspace or the default workspace. Your workspace holds your monitoring data.
- * To use the default workspace, select the check box.
- * To choose a custom workspace, clear the check box. Then select the subscription and region for your custom workspace.
+ a. **Connection Monitor Name**: Enter a name for your connection monitor. Use the standard naming rules for Azure resources.
+ b. **Subscription**: Select a subscription for your connection monitor.
+ c. **Region**: Select a region for your connection monitor. You can select only the source VMs that are created in this region.
+ d. **Workspace configuration**: Your workspace holds your monitoring data. Do either of the following:
+ * To use the default workspace, select the checkbox.
+ * To choose a custom workspace, clear the checkbox, and then select the subscription and region for your custom workspace.
- :::image type="content" source="./media/connection-monitor-2-preview/create-cm-basics.png" alt-text="Screenshot that shows the Basics tab in Connection Monitor.":::
-
-1. At the bottom of the tab, select **Next: Test groups**.
+ :::image type="content" source="./media/connection-monitor-2-preview/create-cm-basics.png" alt-text="Screenshot that shows the 'Basics' pane in Connection Monitor.":::
+
+1. Select **Next: Test groups**.
+
+1. Add sources, destinations, and test configurations in your test groups. To learn about setting up test groups, see [Create test groups in Connection Monitor](#create-test-groups-in-a-connection-monitor).
-1. Add sources, destinations, and test configurations in your test groups. To learn about setting up your test groups, see [Create test groups in Connection Monitor](#create-test-groups-in-a-connection-monitor).
+ :::image type="content" source="./media/connection-monitor-2-preview/create-tg.png" alt-text="Screenshot that shows the 'Test groups' pane in Connection Monitor.":::
- :::image type="content" source="./media/connection-monitor-2-preview/create-tg.png" alt-text="Screenshot that shows the Test groups tab in Connection Monitor.":::
+1. At the bottom of the pane, select **Next: Create Alerts**. To learn about creating alerts, see [Create alerts in Connection Monitor](#create-alerts-in-connection-monitor).
-1. At the bottom of the tab, select **Next: Create Alerts**. To learn about creating alerts, see [Create alerts in Connection Monitor](#create-alerts-in-connection-monitor).
+ :::image type="content" source="./media/connection-monitor-2-preview/create-alert.png" alt-text="Screenshot that shows the 'Create alerts' pane.":::
- :::image type="content" source="./media/connection-monitor-2-preview/create-alert.png" alt-text="Screenshot that shows the Create alert tab.":::
+1. At the bottom of the pane, select **Next: Review + create**.
-1. At the bottom of the tab, select **Next: Review + create**.
+1. On the **Review + create** pane, review the basic information and test groups before you create the connection monitor. If you need to edit the connection monitor, you can do so by going back to the respective panes.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/review-create-cm.png" alt-text="Screenshot that shows the 'Review + create' pane in Connection Monitor.":::
-1. On the **Review + create** tab, review the basic information and test groups before you create the connection monitor. If you need to edit the connection monitor, you can do so by going back to the respective tabs.
- :::image type="content" source="./media/connection-monitor-2-preview/review-create-cm.png" alt-text="Screenshot that shows the Review + create tab in Connection Monitor.":::
> [!NOTE]
- > The **Review + create** tab shows the cost per month during the Connection Monitor stage. Currently, the **Current Cost/Month** column shows no charge. When Connection Monitor becomes generally available, this column will show a monthly charge.
+ > The **Review + create** pane shows the cost per month during the Connection Monitor stage. Currently, the **Current Cost/Month** column shows no charge. When Connection Monitor becomes generally available, this column will show a monthly charge.
> > Even during the Connection Monitor stage, Log Analytics ingestion charges apply.
-1. When you're ready to create the connection monitor, at the bottom of the **Review + create** tab, select **Create**.
+1. When you're ready to create the connection monitor, at the bottom of the **Review + create** pane, select **Create**.
Connection Monitor creates the connection monitor resource in the background.
## Create test groups in a connection monitor
- >[!NOTE]
- >> Connection Monitor now supports auto enablement of monitoring extensions for Azure & Non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection Monitor.
+ > [!NOTE]
+ > Connection Monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection Monitor.
Each test group in a connection monitor includes sources and destinations that get tested on network parameters. They're tested for the percentage of checks that fail and the round-trip time (RTT) over test configurations.
-In the Azure portal, to create a test group in a connection monitor, you specify values for the following fields:
+In the Azure portal, to create a test group in a connection monitor, do the following:
+
+1. **Disable test group**: You can select this checkbox to disable monitoring for all sources and destinations that the test group specifies. This selection is cleared by default.
+1. **Name**: Name your test group.
+1. **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
-* **Disable test group**: You can select this check box to disable monitoring for all sources and destinations that the test group specifies. This selection is cleared by default.
-* **Name**: Name your test group.
-* **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
* To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or virtual machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and virtual machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
-
- You can drill down from the **Subscription** level to other levels in the hierarchy:
+
+ You can drill down from the **Subscription** level to other levels in the hierarchy:
- **Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents**
+ **Subscription** > **Resource group** > **Virtual network** > **Subnet** > **VMs with agents**
- You can also change the **Group by** selector to start the tree from any other level. For example, if you group by virtual network, you see the VMs that have agents in the hierarchy **VNET** > **Subnet** > **VMs with agents**.
+ You can also change the **Group by** selector to start the tree from any other level. For example, if you group by virtual network, you see the VMs that have agents in the hierarchy **Virtual network** > **Subnet** > **VMs with agents**.
- When you select a VNET, subnet, a single VM or a virtual machine scale set the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
+      When you select a virtual network, a subnet, a single VM, or a virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
- :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the Add Sources pane and the Azure endpoints including VMSS tab in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the 'Add Sources' pane and the Azure endpoints, including the 'VMSS' pane, in Connection Monitor.":::
* To choose on-premises agents, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have Network Performance Monitor configured.
-
- If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
-
- Under **Create Connection Monitor**, on the **Basics** tab, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
+
+ 1. If you need to add Network Performance Monitor to your workspace, get it from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
+
+ 1. Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
- :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the Add Sources pane and the Non-Azure endpoints tab in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection Monitor.":::
- * To choose recently used endpoints, you can use the **Recent endpoint** tab
-
- * You need not choose the endpoints with monitoring agents enabled only. You can select Azure or Non-Azure endpoints without the agent enabled and proceed with the creation of Connection Monitor. During the creation process, the monitoring agents for the endpoints will be automatically enabled.
- :::image type="content" source="./media/connection-monitor-2-preview/unified-enablement.png" alt-text="Screenshot that shows the Add Sources pane and the Non-Azure endpoints tab in Connection Monitor with unified enablement.":::
-
- * When you finish setting up sources, select **Done** at the bottom of the tab. You can still edit basic properties like the endpoint name by selecting the endpoint in the **Create Test Group** view.
+1. To choose recently used endpoints, you can use the **Recent endpoint** pane.
+
+ You don't need to choose only endpoints that have monitoring agents enabled. You can select Azure or non-Azure endpoints without the agent enabled and proceed with creating the connection monitor. During the creation process, the monitoring agents for the endpoints will be enabled automatically.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/unified-enablement.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection Monitor with unified enablement.":::
+
+1. When you finish setting up sources, select **Done** at the bottom of the pane. You can still edit basic properties like the endpoint name by selecting the endpoint in the **Create Test Group** view.
-* **Destinations**: You can monitor connectivity to an Azure VM, an on-premises machine, or any endpoint (a public IP, URL, or FQDN) by specifying it as a destination. In a single test group, you can add Azure VMs, on-premises machines, Office 365 URLs, Dynamics 365 URLs, and custom endpoints.
+1. **Destinations**: You can monitor connectivity to an Azure VM, an on-premises machine, or any endpoint (a public IP, URL, or FQDN) by specifying it as a destination. In a single test group, you can add Azure VMs, on-premises machines, Office 365 URLs, Dynamics 365 URLs, and custom endpoints.
- * To choose Azure VMs as destinations, select the **Azure endpoints** tab. By default, the Azure VMs are grouped into a subscription hierarchy that's in the region that you selected under **Create Connection Monitor** on the **Basics** tab. You can change the region and choose Azure VMs from the new region. Then you can drill down from the **Subscription** level to other levels in the hierarchy, just as you can when you set the source Azure endpoints.
+ * To choose Azure VMs as destinations, select the **Azure endpoints** tab. By default, the Azure VMs are grouped into a subscription hierarchy that's in the region that you selected under **Create Connection Monitor** on the **Basics** pane. You can change the region and choose Azure VMs from the new region. Then you can drill down from the **Subscription** level to other levels in the hierarchy, just as you can when you set the source Azure endpoints.
- You can select VNETs, subnets, or single VMs, as you can when you set the source Azure endpoints. When you select a VNET, subnet, or single VM, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet that have the Network Watcher extension participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
+ You can select virtual networks, subnets, or single VMs, as you can when you set the source Azure endpoints. When you select a virtual network, subnet, or single VM, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet that have the Network Watcher extension participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
- :::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests1.png" alt-text="<Screenshot that shows the Add Destinations pane and the Azure endpoints tab.>":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests1.png" alt-text="<Screenshot that shows the 'Add Destinations' pane and the 'Azure endpoints' pane.>":::
- :::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests2.png" alt-text="<Screenshot that shows the Add Destinations pane at the Subscription level.>":::
-
-
- * To choose non-Azure agents as destinations, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have Network Performance Monitor configured.
-
- If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
+ :::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests2.png" alt-text="<Screenshot that shows the 'Add Destinations' pane at the Subscription level.>":::
+
+ * To choose non-Azure agents as destinations, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have Network Performance Monitor configured.
+
+ If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](../azure-monitor/insights/solutions.md). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
- Under **Create Connection Monitor**, on the **Basics** tab, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created.
+ Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created.
- :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-dest.png" alt-text="Screenshot that shows the Add Destinations pane and the Non-Azure endpoints tab.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-dest.png" alt-text="Screenshot that shows the 'Add Destinations' pane and the 'Non-Azure endpoints' pane.":::
- * To choose public endpoints as destinations, select the **External Addresses** tab. The list of endpoints includes Office 365 test URLs and Dynamics 365 test URLs, grouped by name. You also can choose endpoints that were created in other test groups in the same connection monitor.
-
- To add an endpoint, in the upper-right corner, select **Add Endpoint**. Then provide an endpoint name and URL, IP, or FQDN.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-endpoints.png" alt-text="Screenshot that shows where to add public endpoints as destinations in Connection Monitor.":::
-
- * To choose recently used endpoints, go to the **Recent endpoint** tab.
- * When you finish choosing destinations, select **Done**. You can still edit basic properties like the endpoint name by selecting the endpoint in the **Create Test Group** view.
-
-* **Test configurations**: You can add one or more test configurations to a test group. Create a new test configuration by using the **New configuration** tab. Or add a test configuration from another test group in the same Connection Monitor from the **Choose existing** tab.
-
- * **Test configuration name**: Name the test configuration.
- * **Protocol**: Select **TCP**, **ICMP**, or **HTTP**. To change HTTP to HTTPS, select **HTTP** as the protocol and then select **443** as the port.
- * **Create TCP test configuration**: This check box appears only if you select **HTTP** in the **Protocol** list. Select this check box to create another test configuration that uses the same sources and destinations that you specified elsewhere in your configuration. The new test configuration is named **\<name of test configuration>_networkTestConfig**.
- * **Disable traceroute**: This check box applies when the protocol is TCP or ICMP. Select this box to stop sources from discovering topology and hop-by-hop RTT.
- * **Destination port**: You can provide a destination port of your choice.
- * **Listen on port**: This check box applies when the protocol is TCP. Select this check box to open the chosen TCP port if it's not already open.
- * **Test Frequency**: In this list, specify how frequently sources will ping destinations on the protocol and port that you specified. You can choose 30 seconds, 1 minute, 5 minutes, 15 minutes, or 30 minutes. Select **custom** to enter another frequency that's between 30 seconds and 30 minutes. Sources will test connectivity to destinations based on the value that you choose. For example, if you select 30 seconds, sources will check connectivity to the destination at least once every 30 seconds period.
- * **Success Threshold**: You can set thresholds on the following network parameters:
- * **Checks failed**: Set the percentage of checks that can fail when sources check connectivity to destinations by using the criteria that you specified. For the TCP or ICMP protocol, the percentage of failed checks can be equated to the percentage of packet loss. For HTTP protocol, this value represents the percentage of HTTP requests that received no response.
- * **Round trip time**: Set the RTT, in milliseconds, for how long sources can take to connect to the destination over the test configuration.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor.":::
-
-* **Test Groups**: You can add one or more Test Groups to a Connection Monitor. These test groups can consist of multiple Azure or Non-Azure endpoints.
- * For selected Azure VMs or Azure virtual machine scale sets and Non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for Non-Azure endpoints will be auto enablement once the creation of Connection Monitor begins.
- * In case the virtual machine scale set selected is set for manual upgradation, the user will have to upgrade the scale set post Network Watcher extension installation in order to continue setting up the Connection Monitor with the virtual machine scale set as endpoints. In case the virtual machine scale set is set to auto upgradation, the user need not worry about any upgradation after Network Watcher extension installation.
- * In the scenario mentioned above, user can consent to auto upgradation of the virtual machine scale set with auto enablement of Network Watcher extension during the creation of Connection Monitor for virtual machine scale sets with manual upgradation. This would eliminate the need for the user to manually upgrade the virtual machine scale set after installing the Network Watcher extension.
-
- :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test groups and consent for auto-upgradation of VMSS in Connection Monitor.":::
+ * To choose public endpoints as destinations, select the **External Addresses** tab. The list of endpoints includes Office 365 test URLs and Dynamics 365 test URLs, grouped by name. You also can choose endpoints that were created in other test groups in the same connection monitor.
+
+ To add an endpoint, in the upper-right corner, select **Add Endpoint**. Then provide an endpoint name and URL, IP, or FQDN.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/add-endpoints.png" alt-text="Screenshot that shows where to add public endpoints as destinations in Connection Monitor.":::
+
+ * To choose recently used endpoints, go to the **Recent endpoint** pane.
+
+1. When you finish choosing destinations, select **Done**. You can still edit basic properties, such as the endpoint name, by selecting the endpoint in the **Create Test Group** view.
+
+1. **Test configurations**: You can add one or more test configurations to a test group. Create a new test configuration by using the **New configuration** pane. Or add a test configuration from another test group in the same Connection Monitor from the **Choose existing** pane.
+
+ a. **Test configuration name**: Name the test configuration.
+ b. **Protocol**: Select **TCP**, **ICMP**, or **HTTP**. To change HTTP to HTTPS, select **HTTP** as the protocol, and then select **443** as the port.
+ c. **Create TCP test configuration**: This checkbox appears only if you select **HTTP** in the **Protocol** list. Select this checkbox to create another test configuration that uses the same sources and destinations that you specified elsewhere in your configuration. The new test configuration is named **\<name of test configuration>_networkTestConfig**.
+ d. **Disable traceroute**: This checkbox applies when the protocol is TCP or ICMP. Select this box to stop sources from discovering topology and hop-by-hop RTT.
+ e. **Destination port**: You can provide a destination port of your choice.
+ f. **Listen on port**: This checkbox applies when the protocol is TCP. Select this checkbox to open the chosen TCP port if it's not already open.
+ g. **Test Frequency**: In this list, specify how frequently sources will ping destinations on the protocol and port that you specified.
+
+ You can choose 30 seconds, 1 minute, 5 minutes, 15 minutes, or 30 minutes. Select **custom** to enter another frequency that's between 30 seconds and 30 minutes. Sources will test connectivity to destinations based on the value that you choose. For example, if you select 30 seconds, sources will check connectivity to the destination at least once in every 30-second period.
+ h. **Success Threshold**: You can set thresholds on the following network parameters:
+
+   * **Checks failed**: Set the percentage of checks that can fail when sources check connectivity to destinations by using the criteria that you specified. For the TCP or ICMP protocol, the percentage of failed checks can be equated to the percentage of packet loss. For the HTTP protocol, this value represents the percentage of HTTP requests that received no response.
+
+ * **Round trip time**: Set the RTT, in milliseconds, for how long sources can take to connect to the destination over the test configuration.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor.":::
+
+1. **Test Groups**: You can add one or more Test Groups to a Connection Monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
+
+   For selected Azure VMs or Azure virtual machine scale sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for non-Azure endpoints will be enabled automatically when the creation of the connection monitor begins.
+
+ If the selected virtual machine scale set is set for manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension installation. Doing so lets you continue setting up the Connection Monitor with virtual machine scale sets as endpoints. If the virtual machine scale set is set to auto-upgrade, you don't need to worry about upgrading after the installation of the Network Watcher extension.
+
+   For virtual machine scale sets that are set to manual upgrade, you can consent to an auto-upgrade of the scale set, with auto-enabling of the Network Watcher extension, during the creation of the connection monitor. This approach eliminates the need to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test group and consent for an auto-upgrade of the virtual machine scale set in Connection Monitor.":::
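If you prefer to script the portal flow described above, the endpoints, test configurations, and test groups map to objects in the `azure-mgmt-network` Python package. The following is a minimal sketch under stated assumptions: the resource IDs, names, region, and threshold values are placeholders, not values from this article.

```python
# A sketch of creating a connection monitor with the Azure SDK for
# Python (azure-mgmt-network). Resource IDs, names, and values below
# are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ConnectionMonitor,
    ConnectionMonitorEndpoint,
    ConnectionMonitorSuccessThreshold,
    ConnectionMonitorTcpConfiguration,
    ConnectionMonitorTestConfiguration,
    ConnectionMonitorTestGroup,
)

# Source: an Azure VM. Passing a virtual network or subnet resource ID
# instead widens the scope, as described above.
source = ConnectionMonitorEndpoint(
    name="source-vm",
    resource_id=(
        "/subscriptions/<sub-id>/resourceGroups/<rg>"
        "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
    ),
)

# Destination: a public endpoint (URL, IP, or FQDN).
destination = ConnectionMonitorEndpoint(name="external-destination", address="www.contoso.com")

# Test configuration mirroring the fields described above; threshold
# values are illustrative assumptions.
tcp_config = ConnectionMonitorTestConfiguration(
    name="tcp-443-every-30s",
    protocol="Tcp",                  # "Protocol"
    test_frequency_sec=30,           # "Test Frequency"
    tcp_configuration=ConnectionMonitorTcpConfiguration(
        port=443,                    # "Destination port"
        disable_trace_route=False,   # "Disable traceroute"
    ),
    success_threshold=ConnectionMonitorSuccessThreshold(
        checks_failed_percent=5,     # "Checks failed"
        round_trip_time_ms=40,       # "Round trip time"
    ),
)

# The test group references endpoints and test configurations by name.
test_group = ConnectionMonitorTestGroup(
    name="test-group-1",
    sources=[source.name],
    destinations=[destination.name],
    test_configurations=[tcp_config.name],
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.connection_monitors.begin_create_or_update(
    resource_group_name="<network-watcher-rg>",
    network_watcher_name="NetworkWatcher_<region>",
    connection_monitor_name="my-connection-monitor",
    parameters=ConnectionMonitor(
        location="<region>",
        endpoints=[source, destination],
        test_configurations=[tcp_config],
        test_groups=[test_group],
    ),
)
poller.result()  # wait for creation to finish
```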
## Create alerts in Connection Monitor

You can set up alerts on tests that are failing, based on the thresholds set in test configurations.
-In the Azure portal, to create alerts for a connection monitor, you specify values for these fields:
+In the Azure portal, to create alerts for a connection monitor, specify values for these fields:
+
+* **Create alert**: You can select this checkbox to create a metric alert in Azure Monitor. When you select this checkbox, the other fields will be enabled for editing. Additional charges for the alert will be applicable, based on the [pricing for alerts](https://azure.microsoft.com/pricing/details/monitor/).
-- **Create alert**: You can select this check box to create a metric alert in Azure Monitor. When you select this check box, the other fields will be enabled for editing. Additional charges for the alert will be applicable, based on the [pricing for alerts](https://azure.microsoft.com/pricing/details/monitor/).
+* **Scope** > **Resource** > **Hierarchy**: These values are automatically filled, based on the values specified on the **Basics** pane.
-- **Scope** > **Resource** > **Hierarchy**: These values are automatically filled, based on the values specified on the **Basics** tab.
+* **Condition name**: The alert is created on the `Test Result(preview)` metric. When the connection monitor test result is failing, the alert rule will fire.
-- **Condition name**: The alert is created on the `Test Result(preview)` metric. When the result of the connection monitor test is a failing result, the alert rule will fire.
+* **Action group name**: You can enter your email directly or you can create alerts via action groups. If you enter your email directly, an action group with the name **NPM Email ActionGroup** is created. The email ID is added to that action group. If you choose to use action groups, you need to select a previously created action group. To learn how to create an action group, see [Create action groups in the Azure portal](../azure-monitor/alerts/action-groups.md). After the alert is created, you can [manage your alerts](../azure-monitor/alerts/alerts-metric.md#view-and-manage-with-azure-portal).
-- **Action group name**: You can enter your email directly or you can create alerts via action groups. If you enter your email directly, an action group with the name **NPM Email ActionGroup** is created. The email ID is added to that action group. If you choose to use action groups, you need to select a previously created action group. To learn how to create an action group, see [Create action groups in the Azure portal](../azure-monitor/alerts/action-groups.md). After the alert is created, you can [manage your alerts](../azure-monitor/alerts/alerts-metric.md#view-and-manage-with-azure-portal).
+* **Alert rule name**: The name of the connection monitor.
-- **Alert rule name**: The name of the connection monitor.
+* **Enable rule upon creation**: Select this checkbox to enable the alert rule based on the condition. Disable this checkbox if you want to create the rule without enabling it.
-- **Enable rule upon creation**: Select this check box to enable the alert rule based on the condition. Disable this check box if you want to create the rule without enabling it.
+After you've completed all the steps, the process will proceed with a unified enabling of monitoring extensions for all endpoints without monitoring agents enabled, followed by the creation of the connection monitor.
-Once all the steps are completed, the process will proceed with the unified enablement of monitoring extensions for all endpoints without monitoring agents enabled, followed by creation of Connection Monitor.
-Once the creation process is successful, it will take ~ 5 mins for the connection monitor to show up on the dashboard.
+After the creation process is successful, it takes about 5 minutes for the connection monitor to be displayed on the dashboard.
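The alert fields described above can also be scripted. The following sketch uses the `azure-mgmt-monitor` Python package; the metric name, operator, threshold, and resource IDs are assumptions for illustration, so verify them against the metric definitions in your environment.

```python
# A sketch of creating a metric alert on the connection monitor test
# result metric with azure-mgmt-monitor. The metric name, operator,
# threshold, and resource IDs are assumptions for illustration.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertAction,
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

monitor_client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

alert = MetricAlertResource(
    location="global",
    description="Alert on failing connection monitor tests",
    severity=2,
    enabled=True,                                 # mirrors "Enable rule upon creation"
    scopes=["<connection-monitor-resource-id>"],  # mirrors "Scope" > "Resource"
    evaluation_frequency="PT1M",
    window_size="PT5M",
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="test-result-failing",
                metric_name="TestResult",      # assumed ID of Test Result(preview)
                operator="GreaterThanOrEqual",
                threshold=2,                   # assumed: 2 marks a failing test
                time_aggregation="Maximum",
            )
        ]
    ),
    actions=[MetricAlertAction(action_group_id="<action-group-resource-id>")],
)

monitor_client.metric_alerts.create_or_update(
    "<resource-group>", "connection-monitor-alert", alert
)
```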
-## Virtual machine scale set coverage
+## Virtual machine scale set coverage
-Currently, Connection Monitor provides default coverage for the scale set instances selected as endpoints. What this means is, only a default % of all the scale set instances added would be randomly selected to monitor connectivity from the scale set to the endpoint.
-As a best practice, to avoid loss of data due to downscaling of instances, it is advised to select ALL instances in a scale set while creating a test group instead of selecting a particular few for monitoring your endpoints.
+Currently, Connection Monitor provides default coverage for the scale set instances that are selected as endpoints. This means that only a default percentage of all the added scale set instances is randomly selected to monitor connectivity from the scale set to the endpoint.
+As a best practice, to avoid loss of data because of a downscaling of instances, we recommend that you select *all* instances in a scale set while you're creating a test group, instead of selecting a particular few for monitoring your endpoints.
## Scale limits
Connection monitors have these scale limits:
## Clean up resources
-When no longer needed, delete the resource group and all of the resources it contains:
+When you no longer need the resources, delete the resource group and all the resources it contains:
-1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
-2. Select **Delete resource group**.
-3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+1. In the **Search** box at the top of the Azure portal, enter **myResourceGroup** and then, in the search results list, select it.
+1. Select **Delete resource group**.
+1. For **Resource group name**, enter **myResourceGroup**, and then select **Delete**.
## Next steps
-In this tutorial, you learned how to monitor a connection between a virtual machine scale set and a VM. You learned that a network security group rule prevented communication to a VM. To learn about all of the different responses the connection monitor can return, see [response types](network-watcher-connectivity-overview.md#response). You can also monitor a connection between a VM, a fully qualified domain name, a uniform resource identifier, or an IP address.
-
-* Learn [how to analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts).
-* Learn [how to diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network).
+In this tutorial, you learned how to monitor a connection between a virtual machine scale set and a VM. You learned that a network security group rule prevented communication to a VM.
+To learn about all the different responses a connection monitor can return, see [response types](network-watcher-connectivity-overview.md#response). You can also monitor a connection between a VM and a fully qualified domain name, a uniform resource identifier, or an IP address. See also:
+* [Analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts)
+* [Diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network)
> [!div class="nextstepaction"]
> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)
openshift Cluster Administration Cluster Admin Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/cluster-administration-cluster-admin-role.md
Title: Azure Red Hat OpenShift cluster administrator role | Microsoft Docs description: Assignment and usage of the Azure Red Hat OpenShift cluster administrator role --++ Last updated 09/25/2019
openshift Dns Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/dns-forwarding.md
Title: Configure DNS Forwarding for Azure Red Hat OpenShift 4 description: Configure DNS Forwarding for Azure Red Hat OpenShift 4--++ Last updated 04/24/2020
openshift Howto Aad App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-aad-app-configuration.md
Title: Azure Active Directory integration for Azure Red Hat OpenShift description: Learn how to create an Azure AD security group and user for testing apps on your Microsoft Azure Red Hat OpenShift cluster.--++ Last updated 05/13/2019
openshift Howto Add Update Pull Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-add-update-pull-secret.md
Title: Add or update your Red Hat pull secret on an Azure Red Hat OpenShift 4 cluster description: Add or update your Red Hat pull secret on existing 4.x ARO clusters--++ Last updated 05/21/2020
openshift Howto Create Private Cluster 3X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-3x.md
Title: Create a private cluster with Azure Red Hat OpenShift 3.11 description: Learn how to create a private cluster with Azure Red Hat OpenShift 3.11 and about the benefits of private clusters.--++ Last updated 06/02/2022
openshift Howto Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-tenant.md
Title: Create an Azure AD tenant for Azure Red Hat OpenShift description: Here's how to create an Azure Active Directory (Azure AD) tenant to host your Microsoft Azure Red Hat OpenShift cluster.--++ Last updated 05/13/2019
openshift Howto Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-custom-dns.md
Title: Configure custom DNS resources in an Azure Red Hat OpenShift (ARO) cluster description: Discover how to add a custom DNS server on all of your nodes in Azure Red Hat OpenShift (ARO).--++ Last updated 06/02/2021
openshift Howto Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-setup-environment.md
Title: Set up your Azure Red Hat OpenShift development environment description: Here are the prerequisites for working with Microsoft Azure Red Hat OpenShift. keywords: red hat openshift setup set up--++ Last updated 11/04/2019
openshift Howto Spot Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-spot-nodes.md
Title: Use Azure Spot Virtual Machines in an Azure Red Hat OpenShift (ARO) cluster description: Discover how to utilize Azure Spot Virtual Machines in Azure Red Hat OpenShift (ARO)--++ keywords: spot, nodes, aro, deploy, openshift, red hat
openshift Howto Use Acr With Aro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-use-acr-with-aro.md
Title: Use Azure Container Registry with Azure Red Hat OpenShift description: Learn how to pull and run a container from Azure Container Registry in your Azure Red Hat OpenShift cluster.--++ Last updated 01/10/2021
openshift Intro Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/intro-openshift.md
Title: Introduction to Azure Red Hat OpenShift description: Learn the features and benefits of Microsoft Azure Red Hat OpenShift to deploy and manage container-based applications.--++ Last updated 11/13/2020
openshift Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/migration.md
Title: Migrate from an Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4 description: Migrate from an Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4--++ Last updated 08/13/2020
openshift Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/responsibility-matrix.md
description: Learn about the ownership of responsibilities for the operation of
Last updated 4/12/2021--++ keywords: aro, openshift, az aro, red hat, cli, RACI, support
openshift Supported Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/supported-resources.md
Title: Supported resources for Azure Red Hat OpenShift 3.11 description: Understand which Azure regions and virtual machine sizes are supported by Microsoft Azure Red Hat OpenShift.--++ Last updated 05/15/2019
openshift Tutorial Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-connect-cluster.md
Title: Tutorial - Connect to an Azure Red Hat OpenShift 4 cluster description: Learn how to connect a Microsoft Azure Red Hat OpenShift cluster--++ Last updated 04/24/2020
openshift Tutorial Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-delete-cluster.md
Title: Tutorial - Delete an Azure Red Hat OpenShift cluster description: In this tutorial, learn how to delete an Azure Red Hat OpenShift cluster using the Azure CLI-+ -+ Last updated 04/24/2020
orbital Sar Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/sar-reference-architecture.md
Title: SAR reference architecture - Azure Orbital Analytics
-description: Learn about how SAR data is processed horizontally.
+ Title: Process Synthetic Aperture Radar (SAR) data - Azure Orbital Analytics
+description: View a reference architecture that enables processing SAR/Remote Sensing data on Azure by using Apache Spark on Azure Synapse.
Previously updated : 10/11/2022 Last updated : 10/20/2022
-# SAR reference architecture
+# Process Synthetic Aperture Radar (SAR) data in Azure
SAR is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes. SAR uses the motion of the radar antenna over a target to provide finer spatial resolution than conventional stationary beam-scanning radars.
Additional contributors:
- [Azure Synapse](https://azure.microsoft.com/services/synapse-analytics) - [Apache Spark](https://spark.apache.org)
+ - [Argo](https://argoproj.github.io/)
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
Alternatively, you can upload your installation binaries to a container in Azure
### Install IPOPP ```console
-tar --C $INSTALL_DIR -xzf DRL-IPOPP_4.1.tar.gz
+tar -xvzf DRL-IPOPP_4.1.tar.gz --directory $INSTALL_DIR
chmod -R 755 $INSTALL_DIR/IPOPP
$INSTALL_DIR/IPOPP/install_ipopp.sh -installdir $INSTALL_DIR/drl -datadir $INSTALL_DIR/data -ingestdir $INSTALL_DIR/data/ingest
```
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.net | {region}.kusto.windows.net |
| Azure Static Web Apps (Microsoft.Web/staticSites) / staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net |
| Azure Migrate (Microsoft.Migrate) / migrate projects, assessment project and discovery site | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
-| Azure Managed HSM (Microsoft.Keyvault/managedHSMs) / managedhsm | privatelink.managedhsm.azure.net | managedhsm.azure.net |
| Azure API Management (Microsoft.ApiManagement/service) / gateway | privatelink.azure-api.net </br> privatelink.developer.azure-api.net | azure-api.net </br> developer.azure-api.net |
| Microsoft PowerBI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com |
| Azure Bot Service (Microsoft.BotService/botServices) / Bot | botplinks.botframework.com | directline.botframework.com </br> europe.directline.botframework.com |
purview Create Microsoft Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-python.md
For more information about the governance capabilities of Microsoft Purview, for
while (getattr(pa,'provisioning_state')) != "Succeeded" :
    pa = (purview_client.accounts.get(rg_name, purview_name))
    print(getattr(pa,'provisioning_state'))
- if getattr(pa,'provisioning_state') != "Failed" :
+ if getattr(pa,'provisioning_state') == "Failed" :
print("Error in creating Microsoft Purview account") break time.sleep(30)
Here's the full Python code:
while (getattr(pa,'provisioning_state')) != "Succeeded" :
    pa = (purview_client.accounts.get(rg_name, purview_name))
    print(getattr(pa,'provisioning_state'))
- if getattr(pa,'provisioning_state') != "Failed" :
+ if getattr(pa,'provisioning_state') == "Failed" :
print("Error in creating Microsoft Purview account") break time.sleep(30)
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 09/22/2022 Last updated : 10/19/2022
For more information about Microsoft Purview network settings, see [Use private
To create and run a new scan, do the following:
-1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**. Take note of Client ID(App ID).
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**.
+   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD.":::
+
+2. Take note of the Client ID (App ID).
+
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot how to create a Service principle."::: 1. From Azure Active Directory dashboard, select newly created application and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions and grant admin consent for the tenant:
To create and run a new scan, do the following:
1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
-1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**. Take note of Client ID(App ID).
+1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**.
+
+   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD.":::
+
+2. Take note of the Client ID (App ID).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot how to create a Service principle.":::
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-classifications.md
Microsoft Purview classifies data by using [RegEx](https://wikipedia.org/wiki/Re
The City, Country, and Place filters have been prepared by using the best available datasets.
-## Machine Learning model based classifications
-## Person Name
+## Machine Learning-based classifications
+## Person's Name
The Person's Name machine learning model has been trained by using global datasets of names in the English language.

> [!NOTE]
> Microsoft Purview classifies full names stored in the same column, as well as first and last names stored in separate columns.
+## Person's Address
+Person's Address classification is used to detect a full address stored in a single column that contains the following elements: house number, street name, city, state, country, and zip code. The Person's Address classifier uses a machine learning model that is trained on a global addresses dataset in the English language.
## RegEx Classifications
No
## Australia business number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps-- ### Format 11 digits with optional delimiters
No
- ## Austria identity card
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
## Cyprus identity card
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps- ### Format 10 digits without spaces and delimiters
No
## Cyprus tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps- ### Format eight digits and one letter in the specified pattern
No
### Format
-Most common ethnic groups. For a reference list see this [article](https://en.wikipedia.org/wiki/List_of_contemporary_ethnic_groups).
+This classifier consists of the most common ethnic groups. For a reference list, see this [article](https://en.wikipedia.org/wiki/List_of_contemporary_ethnic_groups).
### Checksum Not applicable
No
- ## France health insurance number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## France value added tax number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Germany value added tax number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
## Greece driver's license number
-This entity is included in the EU Driver's License Number sensitive information type. It is also available as a stand-alone sensitive information type entity.
+This entity is included in the EU Driver's License Number sensitive information type. It's also available as a stand-alone sensitive information type entity.
### Format
No
- ## Greece Social Security Number (AMKA)
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Greece tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Hungary personal identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Hungary tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Hungary value added tax number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
### Pattern 12 digits:-- A digit which is not 0 or 1
+- A digit that is not 0 or 1
- Three digits - An optional space or dash - Four digits
Yes
## Italy driver's license number
-This type entity is included in the EU Driver's License Number sensitive information type. It is also available as a stand-alone sensitive information type entity.
+This type entity is included in the EU Driver's License Number sensitive information type. It's also available as a stand-alone sensitive information type entity.
### Format
No
- ## Italy fiscal code
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
not applicable
- ## Italy value added tax number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Japan My Number - Corporate
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Japan My Number - Personal
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Lithuania Personal Code
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Luxemburg national identification number natural persons
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Malta identity card number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
not applicable
- ## Netherlands tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Netherlands value added tax number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## New Zealand bank account number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## New Zealand driver's license number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## New Zealand inland revenue number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
## New Zealand social welfare number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps- ### Format nine digits
Yes
- ## Poland REGON number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Poland tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Romania personal numeric code (CNP)
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Russia passport number domestic
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Russia passport number international
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Slovakia personal number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
-nine or 10 digits containing optional backslash
+nine or ten digits containing optional backslash
### Pattern
No
- ## Slovenia Unique Master Citizen Number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Slovenia tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Yes
- ## Spain DNI
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Not applicable
## Spain social security number (SSN) - ### Format 11-12 digits
Yes
- ## Spain tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Sweden tax identification number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Switzerland SSN AHV number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## U.K. Unique Taxpayer Reference Number
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
Depends on the state
depends on the state - for example, New York: - nine digits formatted like ddd ddd ddd will match.-- nine digits like ddddddddd will not match.
+- nine digits like ddddddddd won't match.
### Checksum
No
## U.S. phone number ### Pattern-- 10 digit number, for e.g., +1 nxx-nxx-xxxx
+- 10 digit number, for example, +1 nxx-nxx-xxxx
- Optional area code: +1 - n can be any digit between 2-9 - x can be any digit between 0-9 - Optional parentheses around the area code - Optional space or - between area code, exchange code, and the last four digits-- Optional 4 digit extension
+- Optional four digit extension
### Checksum Not applicable
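As a rough illustration of this pattern, and not Microsoft Purview's internal expression, a regular expression along these lines matches the format described above; the `x` extension prefix is an assumption.

```python
# Illustrative only; not Purview's internal pattern.
import re

us_phone = re.compile(
    r"^(\+1[ -]?)?"        # optional +1 prefix
    r"\(?[2-9]\d{2}\)?"    # area code, optionally in parentheses
    r"[ -]?[2-9]\d{2}"     # exchange code
    r"[ -]?\d{4}"          # last four digits
    r"( ?x\d{4})?$"        # optional four-digit extension (assumed 'x' prefix)
)

assert us_phone.match("+1 (425) 555-0100")
assert not us_phone.match("123-456-7890")  # area code can't start with 0 or 1
```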
Not applicable
## U.S. zipcode ### Format
-Five digit U.S. Zip code and an optional 4 digit code separated by a hyphen (-).
+Five digit U.S. Zip code and an optional four digit code separated by a hyphen (-).
### Checksum Not applicable
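A rough illustration of this format, again not Microsoft Purview's internal expression:

```python
# Illustrative only; not Purview's internal pattern.
import re

us_zip = re.compile(r"^\d{5}(-\d{4})?$")

assert us_zip.match("98052")
assert us_zip.match("98052-8300")
assert not us_zip.match("98052 8300")  # separator must be a hyphen
```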
No
- ## Ukraine passport domestic
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
No
- ## Ukraine passport international
-This sensitive information type is only available for use in:
-- data loss prevention policies-- communication compliance policies-- information governance-- records management-- Microsoft Defender for Cloud Apps ### Format
remote-rendering Configure Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/configure-model-conversion.md
The schema is identical for converting triangular meshes and point clouds. Howev
## Settings for triangular meshes
-For converting a triangular mesh, for instance from an .fbx file, all parameters from the schema above do affect the outcome. The parameters are explained in detail now:
+When converting a triangular mesh, for instance from an .fbx file, all parameters from the schema above affect the outcome. The parameters are explained in detail in the following sections:
### Geometry parameters
If a model is defined using gamma space, then these options should be set to tru
* `gammaToLinearVertex` - Convert :::no-loc text="vertex"::: colors from gamma space to linear space > [!NOTE]
-> For FBX, E57, PLY and XYZ files these settings are set to `true` by default. For all other file types, the default is `false`.
+> For FBX, E57, PLY, LAS, LAZ, and XYZ files, these settings are set to `true` by default. For all other file types, the default is `false`.
### Scene parameters
The properties that do have an effect on point cloud conversion are:
* `scaling` - same meaning as for triangular meshes. * `recenterToOrigin` - same meaning as for triangular meshes. * `axis` - same meaning as for triangular meshes. Default values are `["+x", "+y", "+z"]`, however most point cloud data will be rotated compared to renderer's own coordinate system. To compensate, in most cases `["+x", "+z", "-y"]` fixes the rotation.
-* `gammaToLinearVertex` - similar to triangular meshes, this flag indicates whether point colors should be converted from gamma space to linear space. Default value for point cloud formats (E57, PLY and XYZ) is true.
+* `gammaToLinearVertex` - similar to triangular meshes, this flag indicates whether point colors should be converted from gamma space to linear space. The default value for point cloud formats (E57, PLY, LAS, LAZ, and XYZ) is `true`.
* `generateCollisionMesh` - similar to triangular meshes, this flag needs to be enabled to support [spatial queries](../../overview/features/spatial-queries.md). But unlike for triangular meshes, this flag doesn't incur longer conversion times, larger output file sizes, or longer runtime loading times. So disabling this flag can't be considered an optimization. A sketch of a settings file that uses these parameters follows this list.
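The following is a minimal sketch of a point cloud conversion settings file, generated here with Python for consistency with the other samples in this collection. The companion-file name and the values are assumptions based on the parameter descriptions above, not a verbatim sample.

```python
# A sketch (not a verbatim sample) of writing a conversion settings
# file for a point cloud, using the parameters described above.
import json

settings = {
    "scaling": 1.0,
    "recenterToOrigin": True,
    "axis": ["+x", "+z", "-y"],      # compensates for typical point cloud rotation
    "gammaToLinearVertex": True,     # default for E57, PLY, LAS, LAZ, and XYZ
    "generateCollisionMesh": True,   # required for spatial queries
}

# Assumed companion-file naming: <modelName>.ConversionSettings.json
with open("MyPointCloud.ConversionSettings.json", "w") as f:
    json.dump(settings, f, indent=2)
```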
A simple way to test whether instancing information gets preserved during conver
#### Example: Instancing setup in 3ds Max
-[Autodesk 3ds Max](https://www.autodesk.de/products/3ds-max) has distinct object cloning modes called **`Copy`**, **`Instance`**, and **`Reference`** that behave differently with regard to instancing in the exported `.fbx` file.
+[Autodesk 3ds Max](https://www.autodesk.de/products/3ds-max) has distinct object cloning modes called **`Copy`**, **`Instance`**, and **`Reference`** that behave differently regarding instancing in the exported `.fbx` file.
![Cloning in 3ds Max](./media/3dsmax-clone-object.png)
Because lighting is already baked into the textures, no dynamic lighting is need
### Use case: Visualization of compact machines, etc.
-In these use cases, the models often have very high detail within a small volume. The renderer is heavily optimized to handle such cases well. However, most of the optimizations mentioned in the previous use case don't apply here:
+In these use cases, the models often have high detail within a small volume. The renderer is heavily optimized to handle such cases well. However, most of the optimizations mentioned in the previous use case don't apply here:
* Individual parts should be selectable and movable, so the `sceneGraphMode` must be left to `dynamic`. * Ray casts are typically an integral part of the application, so collision meshes must be generated.
remote-rendering Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/model-conversion.md
# Convert models
-Azure Remote Rendering allows you to render very complex models. To achieve maximum performance, the data must be preprocessed to be in an optimal format. Depending on the amount of data, this step might take a while. It would be impractical, if this time was spent during model loading. Also, it would be wasteful to repeat this process for multiple sessions.
+Azure Remote Rendering allows you to render complex models. To achieve maximum performance, the data must be preprocessed into an optimal format. Depending on the amount of data, this step might take a while. It would be impractical if this time were spent during model loading. Also, it would be wasteful to repeat this process for multiple sessions.
For these reasons, the ARR service provides a dedicated *conversion service*, which you can run ahead of time. Once converted, a model can be loaded from an Azure Storage Account.
The conversion service supports these formats:
* **FBX** (version 2011 to version 2020)
* **GLTF**/**GLB** (version 2.x)
-There are minor differences between the formats with regard to material property conversion, as listed in chapter [material mapping for model formats](../../reference/material-mapping.md).
+There are minor differences between the formats regarding material property conversion, as listed in chapter [material mapping for model formats](../../reference/material-mapping.md).
### Point clouds
There are minor differences between the formats with regard to material property
In case any other properties exist, they're ignored during ingestion.
* **E57** : E57 contains two types of data: `data3d` and `image2d`. The conversion service only loads the `data3d` part of the file, while the `image2d` part is ignored.
+* **LAS**, **LAZ** : If color data isn't present, the intensity attribute is used as the color.
## The conversion process
remote-rendering System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/system-requirements.md
For Unity 2020, use the latest version of Unity 2020.3.
For Unity 2021, use the latest version of Unity 2021.3.
+### WMR vs. OpenXR
+
+In Unity 2019 and Unity 2020, you can still choose between the WMR (Windows Mixed Reality) and OpenXR plugins. WMR is deprecated for Unity 2021 and later. A known limitation of the WMR plugin is that it doesn't work in linear color space.
+
## Next steps
* [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
search Cognitive Search Custom Skill Form https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-form.md
In this Azure Cognitive Search skillset example, you'll learn how to create a Form Recognizer custom skill
## Train your model
-You'll need to train a Form Recognizer model with your input forms before you use this skill. Follow the [cURL quickstart](../applied-ai-services/form-recognizer/quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api) to learn how to train a model. You can use the sample forms provided in that quickstart, or you can use your own data. Once the model is trained, copy its ID value to a secure location.
+You'll need to train a Form Recognizer model with your input forms before you use this skill. Follow the [cURL quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) to learn how to train a model. You can use the sample forms provided in that quickstart, or you can use your own data. Once the model is trained, copy its ID value to a secure location.
## Set up the custom skill
In this guide, you created a custom skill from the Azure Form Recognizer service
* [Add a custom skill to an AI enrichment pipeline](cognitive-search-custom-skill-interface.md) * [Define a skillset](cognitive-search-defining-skillset.md) * [Create a skillset (REST)](/rest/api/searchservice/create-skillset)
-* [Map enriched fields](cognitive-search-output-field-mapping.md)
+* [Map enriched fields](cognitive-search-output-field-mapping.md)
search Cognitive Search Debug Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-debug-session.md
Previously updated : 07/12/2022 Last updated : 10/20/2022 # Debug Sessions in Azure Cognitive Search
Debug Sessions is a visual editor that works with an existing skillset in the Azure portal
## How a debug session works
-When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document that will be used to test the skillset. All session state will be saved to a new blob container created by the Azure Cognitive Search service in an Azure Storage account that you provide. The name of the generated container has a prefix of "ms-az-cognitive-search-debugsession".
+When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document that will be used to test the skillset. All session state is saved to a new blob container that the Azure Cognitive Search service creates in an Azure Storage account that you provide. The name of the generated container has the prefix "ms-az-cognitive-search-debugsession". When you choose the target storage account for the exported debug session data, you're always asked to select the container explicitly. No container is preselected, which prevents debug session data from being exported by mistake to a customer-created container that may hold unrelated data.
A cached copy of the enriched document and skillset is loaded into the visual editor so that you can inspect the content and metadata of the enriched document, with the ability to check each document node and edit any aspect of the skillset definition. Any changes made within the session are cached. Those changes will not affect the published skillset unless you commit them. Committing changes will overwrite the production skillset.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
Previously updated : 06/15/2022 Last updated : 10/19/2022 # Debug an Azure Cognitive Search skillset in Azure portal
A Debug Session works with all generally available [indexer data sources](search
The debug session begins by executing the indexer and skillset on the selected document. The document's content and metadata created will be visible and available in the session.
-A debug session can be canceled while it's executing using the **Cancel** button.
+A debug session can be canceled while it's executing by using the **Cancel** button. If you cancel the session, you can still analyze the partial results.
+
+A debug session is expected to take longer to execute than the indexer because it goes through extra processing.
+ ## Start with errors and warnings
If skills produce output but the search index is empty, check the field mappings
## Debug a custom skill locally
-Custom skills can be more challenging to debug because the code runs externally. This section describes how to locally debug your Custom Web API skill, debug session, Visual Studio Code and [ngrok](https://ngrok.com/docs). This technique works with custom skills that execute in [Azure Functions](../azure-functions/functions-overview.md) or any other Web Framework that runs locally (for example, [FastAPI](https://fastapi.tiangolo.com/)).
+Custom skills can be more challenging to debug because the code runs externally, so the debug session can't be used to debug them. This section describes how to debug your Custom Web API skill locally by using a debug session, Visual Studio Code, and [ngrok](https://ngrok.com/docs). This technique works with custom skills that execute in [Azure Functions](../azure-functions/functions-overview.md) or any other Web Framework that runs locally (for example, [FastAPI](https://fastapi.tiangolo.com/)).
### Run ngrok
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
Title: Indexing with Azure Cosmos DB for MongoDB
description: Set up a search indexer to index data stored in Azure Cosmos DB for full text search in Azure Cognitive Search. This article explains how to index data in Azure Cosmos DB for MongoDB. Previously updated : 07/12/2022 Last updated : 10/20/2022 # Indexing with Azure Cosmos DB for MongoDB
These are the limitations of this feature:
+ Custom queries aren't supported for specifying the data set.
-+ The column name `_ts` is a reserved word. If you need this field, consider alternative solutions for populating an index. You could use the [push API](search-what-is-data-import.md). Or, you could use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) with an Azure Cognitive Search index as the sink.
++ The column name `_ts` is a reserved word. If you need this field, consider alternative solutions for populating an index.
+If your scenario requires either of these capabilities, you could use the [Push API/SDK](search-what-is-data-import.md) as an alternative to this connector, or consider [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) with an [Azure Cognitive Search index](../data-factory/connector-azure-search.md) as the sink.
## Define the data source
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
Microsoft Azure Services each support one or more of the encryption at rest mode
### Azure disk encryption
-Any customer using Azure Infrastructure as a Service (IaaS) features can achieve encryption at rest for their IaaS VMs and disks through Azure Disk Encryption. For more information on Azure Disk encryption, see [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../virtual-machines/linux/disk-encryption-overview.md).
+Any customer using Azure Infrastructure as a Service (IaaS) features can achieve encryption at rest for their IaaS VMs and disks through Azure Disk Encryption. For more information on Azure Disk encryption, see [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md).
#### Azure storage
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
The following recommended playbooks, and other similar playbooks are available t
- **Notification playbooks** are triggered when an alert or incident is created and send a notification to a configured destination:
- - [Post a message in a Microsoft Teams channel](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Teams/Playbooks/Post-Message-Teams)
+ - [Post a message in a Microsoft Teams channel](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Post-Message-Teams)
- [Send an Outlook email notification](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification)
- - [Post a message in a Slack channel](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Post-Message-Slack)
+ - [Post a message in a Slack channel](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SentinelSOARessentials/Playbooks/Post-Message-Slack)
- **Blocking playbooks** are triggered when an alert or incident is created, gather entity information like the account, IP address, and host, and block them from further actions:
sentinel Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices.md
Schedule the following Microsoft Sentinel activities regularly to ensure continu
## Integrate with Microsoft security services
-Microsoft Sentinel is empowered by the components that send data to your workspace, and is made stronger through integrations with other Microsoft services. Any logs ingested into products such as Microsoft Defender for Cloud Apps, Microsoft Defender for Endpoint, and Microsoft Defender for Identity allow these services to create detections, and in turn provide those detections to Microsoft Sentinel. Logs can also be ingested directly into Microsoft Sentinel to provide a fuller picture for events and incidents.
+Microsoft Sentinel is empowered by the components that send data to your workspace, and is made stronger through integrations with other Microsoft services. Any logs ingested into products such as Microsoft Defender for Cloud Apps, Microsoft Defender for Endpoint, and Microsoft Defender for Identity allow these services to create detections, and in turn provide those detections to Microsoft Sentinel. Logs can also be ingested directly into Microsoft Sentinel to provide a fuller picture of events and incidents.
For example, the following image shows how Microsoft Sentinel ingests data from other Microsoft services and multi-cloud and partner platforms to provide coverage for your environment:
sentinel Ci Cd Custom Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-deploy.md
A sample repository is available with demonstrating the deployment config file a
For more information, see:
- [Sentinel CICD repositories sample](https://github.com/SentinelCICD/RepositoriesSampleContent)
-- [Create Resource Manager parameter file](/../../azure/azure-resource-manager/templates/parameter-files.md)
-- [Parameters in ARM templates](/../../azure/azure-resource-manager/templates/parameters.md)
+- [Create Resource Manager parameter file](/azure/azure-resource-manager/templates/parameter-files)
+- [Parameters in ARM templates](/azure/azure-resource-manager/templates/parameters)
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention (DLP)**.
+Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention (DLP)** and **Azure Active Directory Identity Protection (AADIP)**.
-The connector also lets you stream **advanced hunting** events from *all* of the above components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
+The connector also lets you stream **advanced hunting** events from *all* of the above Defender components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
For more information about incident integration and advanced hunting event collection, see [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md#advanced-hunting-event-collection).
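Once streaming is enabled, the Defender components' tables can be queried directly in your Log Analytics workspace. The following is a minimal sketch, assuming the **DeviceProcessEvents** advanced hunting table is being collected through the connector; the table and column names come from the Microsoft 365 Defender advanced hunting schema:

```kusto
// Minimal sketch: query streamed Defender advanced hunting events in
// Microsoft Sentinel. Assumes DeviceProcessEvents collection is enabled
// on the Microsoft 365 Defender connector.
DeviceProcessEvents
| where Timestamp > ago(1d)
| where FileName =~ "powershell.exe"
| summarize ProcessCount = count() by DeviceName
| top 10 by ProcessCount
```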
Verify that you've satisfied the [prerequisites](#prerequisites-for-active-direc
| **[EmailPostDeliveryEvents](/microsoft-365/security/defender/advanced-hunting-emailpostdeliveryevents-table)** | Security events that occur post-delivery, after Microsoft 365 has delivered the emails to the recipient mailbox |
| **[EmailUrlInfo](/microsoft-365/security/defender/advanced-hunting-emailurlinfo-table)** | Information about URLs on emails |
- # [Defender for Identity (New!)](#tab/MDI)
+ # [Defender for Identity](#tab/MDI)
| Table name | Events type |
|-|-|
Verify that you've satisfied the [prerequisites](#prerequisites-for-active-direc
| **[IdentityLogonEvents](/microsoft-365/security/defender/advanced-hunting-identitylogonevents-table)** | Authentication activities made through your on-premises Active Directory, as captured by Microsoft Defender for Identity <br><br>Authentication activities related to Microsoft online services, as captured by Microsoft Defender for Cloud Apps |
| **[IdentityQueryEvents](/microsoft-365/security/defender/advanced-hunting-identityqueryevents-table)** | Information about queries performed against Active Directory objects such as users, groups, devices, and domains |
- # [Defender for Cloud Apps (New!)](#tab/MDCA)
+ # [Defender for Cloud Apps](#tab/MDCA)
| Table name | Events type |
|-|-|
| **[CloudAppEvents](/microsoft-365/security/defender/advanced-hunting-cloudappevents-table)** | Information about activities in various cloud apps and services covered by Microsoft Defender for Cloud Apps |
- # [Defender alerts (New!)](#tab/MDA)
+ # [Defender alerts](#tab/MDA)
| Table name | Events type |
|-|-|
In the **Next steps** tab, you'll find some useful workbooks, sample queries,
## Next steps
-In this document, you learned how to integrate Microsoft 365 Defender incidents, and advanced hunting event data from Microsoft Defender for Endpoint and Defender for Office 365, into Microsoft Sentinel, using the Microsoft 365 Defender connector. To learn more about Microsoft Sentinel, see the following articles:
+In this document, you learned how to integrate Microsoft 365 Defender incidents, and advanced hunting event data from Microsoft Defender component services, into Microsoft Sentinel, using the Microsoft 365 Defender connector. To learn more about Microsoft Sentinel, see the following articles:
- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/hunting.md
Use queries before, during, and after a compromise to take the following actions
> > - Now in public preview, you can also create hunting and livestream queries over data stored in Azure Data Explorer. For more information, see details of [constructing cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md) in the Azure Monitor documentation. >
-> - Use community resources, such as the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries) to find additional queries and data sources.
+> - Use community resources, such as the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries), to find additional queries and data sources.
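As an illustration of the cross-resource pattern mentioned in the note above, a hunting query can reach into Azure Data Explorer with the `adx()` function. This is a minimal sketch that uses the public Kusto samples cluster and its `StormEvents` table as a stand-in for your own cluster and data:

```kusto
// Minimal sketch: query an Azure Data Explorer table from Log Analytics
// using the adx() cross-resource function (public preview).
adx("https://help.kusto.windows.net/Samples").StormEvents
| summarize EventCount = count() by State
| top 5 by EventCount
```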
## Use the hunting dashboard
The following table describes detailed actions available from the hunting dashboard:
| Action | Description |
| --- | --- |
-| **See how queries apply to your environment** | Select the **Run all queries (Preview)** button, or select a subset of queries using the check boxes to the left of each row and select the **Run selected queries (Preview)** button. <br><br>Running your queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried. |
+| **See how queries apply to your environment** | Select the **Run all queries (Preview)** button, or select a subset of queries using the checkboxes to the left of each row and select the **Run selected queries (Preview)** button. <br><br>Running your queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried. |
| **View the queries that returned results** | After your queries are done running, view the queries that returned results using the **Results** filter: <br>- Sort to see which queries had the most or fewest results. <br>- View the queries that are not at all active in your environment by selecting *N/A* in the **Results** filter. <br>- Hover over the info icon (**i**) next to the *N/A* to see which data sources are required to make this query active. |
| **Identify spikes in your data** | Identify spikes in the data by sorting or filtering on **Results delta** or **Results delta percentage**. <br><br>This compares the results of the last 24 hours against the results of the previous 24-48 hours, highlighting any large differences or relative difference in volume. |
| **View queries mapped to the MITRE ATT&CK tactic** | The **MITRE ATT&CK tactic bar**, at the top of the table, lists how many queries are mapped to each MITRE ATT&CK tactic. The tactic bar gets dynamically updated based on the current set of filters applied. <br><br>This enables you to see which MITRE ATT&CK tactics show up when you filter by a given result count, a high result delta, *N/A* results, or any other set of filters. |
In the example above, start with the table name SecurityEvent and add piped elements as needed.
1. Add a filter in the query to only show event ID 4688.
-1. Add a filter in the query on the CommandLine to contain only instances of cscript.exe.
+1. Add a filter in the query on the command line to contain only instances of cscript.exe.
1. Project only the columns you're interested in exploring, limit the results to 1000, and select **Run query**.
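Put together, the steps above produce a query along the following lines. This is a minimal sketch assuming the standard `SecurityEvent` columns; adjust the projected columns to whatever you want to explore:

```kusto
// Minimal sketch of the query built in the steps above.
// EventID 4688 is the Windows process-creation event.
SecurityEvent
| where EventID == 4688
| where CommandLine contains "cscript.exe"
| project TimeGenerated, Computer, Account, CommandLine
| take 1000
```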
During the hunting and investigation process, you may come across query results
- Investigate a single bookmarked finding by selecting the bookmark and then clicking **Investigate** in the details pane to open the investigation experience. You can also directly select a listed entity to view that entityΓÇÖs corresponding entity page.
- You can also create an incident from one or more bookmarks or add one or more bookmarks to an existing incident. Select a checkbox to the left of any bookmarks you want to use, and then select **Incident actions** > **Create new incident** or **Add to existing incident**. Triage and investigate the incident like any other.
+ You can also create an incident from one or more bookmarks, or add one or more bookmarks to an existing incident. Select a checkbox to the left of any bookmarks you want to use, and then select **Incident actions** > **Create new incident** or **Add to existing incident**. Triage and investigate the incident like any other.
> [!TIP]
> Bookmarks represent key events that are noteworthy and should be escalated to incidents if they are severe enough to warrant an investigation. Events such as potential root causes, indicators of compromise, or other notable events should be raised as a bookmark.
For more information, see:
- [The Infosec Jupyter Book](https://infosecjupyterbook.com)
- [Real Python tutorials](https://realpython.com)
-The following table describes some methods of using Juypter notebooks to help your processes in Microsoft Sentinel:
+The following table describes some methods of using Jupyter notebooks to help your processes in Microsoft Sentinel:
|Method |Description |
|---|---|
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
This integration gives Microsoft 365 security incidents the visibility to be managed from within Microsoft Sentinel
Other services whose alerts are collected by Microsoft 365 Defender include:
- **Microsoft Purview Data Loss Prevention (DLP)** ([Learn more](/microsoft-365/security/defender/investigate-dlp))
+- **Azure Active Directory Identity Protection (AADIP)** ([Learn more](/defender-cloud-apps/aadip-integration))
In addition to collecting alerts from these components and other services, Microsoft 365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel.
Once you have enabled the Microsoft 365 Defender data connector to [collect inci
- Incidents will be ingested and synchronized at no extra cost.
-Once the Microsoft 365 Defender integration is connected, all the component alert connectors (Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud Apps) will be automatically connected in the background if they weren't already. If any component licenses were purchased after Microsoft 365 Defender was connected, the alerts and incidents from the new product will still flow to Microsoft Sentinel with no additional configuration or charge.
+Once the Microsoft 365 Defender integration is connected, the connectors for all the integrated components and services (Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud Apps, Azure Active Directory Identity Protection) will be automatically connected in the background if they weren't already. If any component licenses were purchased after Microsoft 365 Defender was connected, the alerts and incidents from the new product will still flow to Microsoft Sentinel with no additional configuration or charge.
## Microsoft 365 Defender incidents and Microsoft incident creation rules
Once the Microsoft 365 Defender integration is connected, all the component aler
- Using both mechanisms together is completely supported, and can be used to facilitate the transition to the new Microsoft 365 Defender incident creation logic. Doing so will, however, create **duplicate incidents** for the same alerts.
-- To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for Microsoft 365 products (Defender for Endpoint, Defender for Identity, and Defender for Office 365, and Defender for Cloud Apps) when connecting Microsoft 365 Defender. This can be done by disabling incident creation in the connector page. Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to Microsoft 365 Defender incident integration.
+- To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for Microsoft 365 Defender-integrated products (Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud Apps, and Azure Active Directory Identity Protection) when connecting Microsoft 365 Defender. This can be done by disabling incident creation in the connector page. Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to Microsoft 365 Defender incident integration.
> [!NOTE]
> All Microsoft Defender for Cloud Apps alert types are now being onboarded to Microsoft 365 Defender.
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="dstapptype"></a>**DstAppType** | Optional | AppType | The type of the destination application. For a list of allowed values and further information, refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [DstAppName](#dstappname) or [DstAppId](#dstappid) are used. | | <a name="dstprocessname"></a>**DstProcessName** | Optional | String | The file name of the process that terminated the network session. This name is typically considered to be the process name. <br><br>Example: `C:\Windows\explorer.exe` | | <a name="process"></a>**Process** | Alias | | Alias to the [DstProcessName](#dstprocessname) <br><br>Example: `C:\Windows\System32\rundll32.exe`|
-| **SrcProcessId**| Optional | String | The process ID (PID) of the process that terminated the network session.<br><br>Example: `48610176` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
-| **SrcProcessGuid** | Optional | String | A generated unique identifier (GUID) of the process that terminated the network session. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` |
+| **DstProcessId**| Optional | String | The process ID (PID) of the process that terminated the network session.<br><br>Example: `48610176` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **DstProcessGuid** | Optional | String | A generated unique identifier (GUID) of the process that terminated the network session. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` |
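As a hedged illustration of the corrected field names, the following sketch projects the destination-process fields through the ASIM `_Im_NetworkSession` parser. It assumes the ASIM parsers are deployed in your workspace and that your source populates these optional fields:

```kusto
// Sketch only: project the destination-process fields of ASIM-normalized
// network sessions. DstProcessId and DstProcessGuid are optional fields,
// so many sources leave them empty.
_Im_NetworkSession(starttime=ago(1h), endtime=now())
| where isnotempty(DstProcessName)
| project TimeGenerated, SrcIpAddr, DstIpAddr, DstProcessName, DstProcessId, DstProcessGuid
```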
### Source system fields
If the event is reported by one of the endpoints of the network session, it migh
The following are the changes in version 0.2.1 of the schema:
- Added `Src` and `Dst` as aliases to a leading identifier for the source and destination systems.
-- Added the fields `**`NetworkConnectionHistory`**`, `**`SrcVlanId`**`, `**`DstVlanId`**`, `InnerVlanId`, and `OuterVlanId`.
+- Added the fields `NetworkConnectionHistory`, `SrcVlanId`, `DstVlanId`, `InnerVlanId`, and `OuterVlanId`.
The following are the changes in version 0.2.2 of the schema:
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
If you're a partner who wants to create your own solution, see the [Microsoft Se
## Prerequisites
-In order to install, update or delete solutions in content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built in roles](/../role-based-access-control.md/built-in-roles#template-spec-contributor) for details on this role.
+In order to install, update, or delete solutions in the content hub, you need the **Template Spec Contributor** role at the resource group level. See [Azure RBAC built-in roles](/azure/role-based-access-control/built-in-roles#template-spec-contributor) for details on this role.
This is in addition to Microsoft Sentinel-specific roles. For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
### Health intelligence sharing community (H-ISAC)

-- [Join the H-ISAC](https://h-isac.org/soltra/) to get the credentials to access this feed.
+- [Join the H-ISAC](https://h-isac.org/) to get the credentials to access this feed.
### IBM X-Force
Besides being used to import threat indicators, threat intelligence feeds can al
### ReversingLabs TitaniumCloud

-- Find and enable incident enrichment playbooks for [ReversingLabs](https://www.reversinglabs.com/products/file-reputation-service) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/ReversingLabs/Playbooks/Enrich-SentinelIncident-ReversingLabs-File-Information).
+- Find and enable incident enrichment playbooks for [ReversingLabs](https://www.reversinglabs.com/products/file-reputation-service) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/ReversingLabs/Playbooks/ReversingLabs-EnrichFileHash).
- See the ReversingLabs Intelligence Logic App [connector documentation](/connectors/reversinglabsintelligence/).

### RiskIQ Passive Total
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## October 2022
+- [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip)
+- [Out of the box anomaly detection on the SAP audit log (Preview)](#out-of-the-box-anomaly-detection-on-the-sap-audit-log-preview)
+
+### Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
+
+As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) will be integrating [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between two levels of integration:
+
+- **Selective alerts** (default) includes only alerts chosen by Microsoft security researchers, mostly of Medium and High severities.
+- **All alerts** includes all AADIP alerts of any severity.
+
+This integration can't be disabled.
+
+Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 365 Defender integration](microsoft-365-defender-sentinel-integration.md) enabled will automatically start receiving AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
+
+- If you already have your AADIP connector enabled in Microsoft Sentinel, and you've enabled incident creation, you may receive duplicate incidents. To avoid this, you have a few choices, listed here in descending order of preference:
+
+ | Preference | Action in Microsoft 365 Defender | Action in Microsoft Sentinel |
+ | - | - | - |
+ | **1** | Keep the default AADIP integration of **Selective alerts**. | Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+ | **2** | Choose the **All alerts** AADIP integration. | Create automation rules to automatically close incidents with unwanted alerts.<br><br>Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+ | **3** | Don't use Microsoft 365 Defender for AADIP alerts:<br>Choose either option for AADIP integration. | Create automation rules to close all incidents where <br>- the *incident provider* is `Microsoft 365 Defender` and <br>- the *alert provider* is `Azure Active Directory Identity Protection`. <br><br>Leave enabled those [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+
+- If you don't have your [AADIP connector](data-connectors-reference.md#azure-active-directory-identity-protection) enabled, you must enable it. Be sure **not** to enable incident creation on the connector page. If you don't enable the connector, you may receive AADIP incidents without any data in them.
+
+- If you're first enabling your Microsoft 365 Defender connector now, the AADIP connection will be made automatically behind the scenes. You won't need to do anything else.
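To gauge which incidents an automation rule like the one in option 3 of the preceding table would act on, you can first list them with a query. This is a hedged sketch, assuming the standard `SecurityIncident` and `SecurityAlert` tables and the AADIP product name as it appears in `SecurityAlert`:

```kusto
// Sketch only: list Microsoft 365 Defender incidents whose alerts were
// produced by Azure Active Directory Identity Protection.
SecurityIncident
| where ProviderName == "Microsoft 365 Defender"
| mv-expand AlertId = AlertIds
| extend AlertId = tostring(AlertId)
| join kind=inner (
    SecurityAlert
    | where ProductName == "Azure Active Directory Identity Protection"
    | project SystemAlertId
) on $left.AlertId == $right.SystemAlertId
| distinct IncidentNumber, Title
```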
+
### Out of the box anomaly detection on the SAP audit log (Preview)

The SAP audit log records audit and security events on SAP systems, like failed sign-in attempts, among over 200 other security-related actions. Customers monitor the SAP audit log and generate alerts and incidents out of the box using Microsoft Sentinel built-in analytics rules.
Learn how to [add an entity to your threat intelligence](add-entity-to-threat-in
## August 2022

-- [Heads up: Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#heads-up-microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip)
- [Azure resource entity page (Preview)](#azure-resource-entity-page-preview)
- [New data sources for User and entity behavior analytics (UEBA) (Preview)](#new-data-sources-for-user-and-entity-behavior-analytics-ueba-preview)
- [Microsoft Sentinel Solution for SAP is now generally available](#microsoft-sentinel-solution-for-sap-is-now-generally-available)
-### Heads up: Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
-
-[Microsoft 365 Defender](/microsoft-365/security/defender/) is gradually rolling out the integration of [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents.
-
-Microsoft Sentinel customers with the [Microsoft 365 Defender connector](microsoft-365-defender-sentinel-integration.md) enabled will automatically start receiving AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
--- If you already have your AADIP connector enabled in Microsoft Sentinel, you may receive duplicate incidents. To avoid this, you have a few choices, listed here in descending order of preference:-
- - Disable incident creation in your AADIP data connector.
-
- - Disable AADIP integration at the source, in your Microsoft 365 Defender portal.
-
- - Create an automation rule in Microsoft Sentinel to automatically close incidents created by the [Microsoft Security analytics rule](create-incidents-from-alerts.md) that creates AADIP incidents.
-- If you don't have your AADIP connector enabled, you may receive AADIP incidents, but without any data in them. To correct this, simply [enable your AADIP connector](data-connectors-reference.md#azure-active-directory-identity-protection). Be sure **not** to enable incident creation on the connector page.
--- If you're first enabling your Microsoft 365 Defender connector now, the AADIP connection will be made automatically behind the scenes. You won't need to do anything else.
-
### Azure resource entity page (Preview)

Azure resources such as Azure Virtual Machines, Azure Storage Accounts, Azure Key Vault, Azure DNS, and more are essential parts of your network. Threat actors might attempt to obtain sensitive data from your storage account, gain access to your key vault and the secrets it contains, or infect your virtual machine with malware. The new [Azure resource entity page](entity-pages.md) is designed to help your SOC investigate incidents that involve Azure resources in your environment, hunt for potential attacks, and assess risk.
service-bus-messaging Service Bus Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-quickstart-portal.md
Title: Use the Azure portal to create a Service Bus queue
description: In this quickstart, you learn how to create a Service Bus namespace and a queue in the namespace by using the Azure portal. Previously updated : 09/10/2021 Last updated : 10/20/2022
service-bus-messaging Service Bus Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-samples.md
Title: Azure Service Bus samples or examples
description: Azure Service Bus messaging samples or examples that demonstrate key features. Previously updated : 07/23/2021 Last updated : 10/19/2022
service-fabric How To Deploy Service Fabric Application System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-service-fabric-application-system-assigned-managed-identity.md
Last updated 07/11/2022
In order to access the managed identity feature for Azure Service Fabric applications, you must first enable the Managed Identity Token Service on the cluster. This service is responsible for the authentication of Service Fabric applications using their managed identities, and for obtaining access tokens on their behalf. Once the service is enabled, you can see it in Service Fabric Explorer under the **System** section in the left pane, running under the name **fabric:/System/ManagedIdentityTokenService** next to other system services. > [!NOTE]
-> Deployment of Service Fabric applications with managed identities are supported starting with API version `"2019-06-01-preview"`. You can also use the same API version for application type, application type version and service resources. The minimum supported Service Fabric runtime is 6.5 CU2. In additoin, the build / package environment should also have the SF .Net SDK at CU2 or higher
+> Deployment of Service Fabric applications with managed identities is supported starting with API version `"2019-06-01-preview"`. You can also use the same API version for application type, application type version, and service resources. The minimum supported Service Fabric runtime is 6.5 CU2. In addition, the build/package environment should also have the SF .NET SDK at CU2 or higher.
## System-assigned managed identity
site-recovery Physical Server Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-server-enable-replication.md
Previously updated : 09/21/2022 Last updated : 10/20/2022 # Enable replication for a physical server - Modernized
Lists all the machines discovered by various appliances registered to the vault
- **Managed disks**
- By default, Standard HDD managed disks are created in Azure. Select **Customize** to customize the type of Managed disks. Choose the type of disk based on the business requirement. Ensure to [choose the appropriate disk type](../virtual-machines/disks-types.md#disk-type-comparison) based on the IOPS of the source machine disks. For pricing information, see [managed disk pricing](/pricing/details/managed-disks/).
+ By default, Standard HDD managed disks are created in Azure. Select **Customize** to customize the type of managed disks. Choose the type of disk based on the business requirement. Be sure to [choose the appropriate disk type](../virtual-machines/disks-types.md#disk-type-comparison) based on the IOPS of the source machine disks. For pricing information, see [managed disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
>[!Note]
>If Mobility Service is installed manually before enabling replication, you can change the type of managed disk, at a disk level. Otherwise, one managed disk type can be chosen at a machine level by default.
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
The following list shows the Tanzu Buildpacks available in Azure Spring Apps Enterprise tier:
- tanzu-buildpacks/java-azure
- tanzu-buildpacks/dotnet-core
- tanzu-buildpacks/go
+- tanzu-buildpacks/web-servers
- tanzu-buildpacks/nodejs - tanzu-buildpacks/python
Not all Tanzu Buildpacks support all service binding types. The following table
|Go |❌|❌|❌|✅|❌|
|Python|❌|❌|❌|❌|❌|
|NodeJS|❌|✅|✅|✅|✅|
+|[WebServers](how-to-enterprise-deploy-static-file.md)|❌|❌|❌|✅|❌|
To edit service bindings for the builder, select **Edit**. After a builder is bound to the service bindings, the service bindings are enabled for an app deployed with the builder.
spring-apps How To Enterprise Deploy Non Java Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-non-java-apps.md
Your application must conform to the following restrictions:
The following table indicates the features supported for each language.
-| Feature | Java | Python | Node | .NET Core | Go |
-|--||--||--|-|
-| App lifecycle management | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Assign endpoint | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Azure Monitor | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Out of box APM integration | ✔️ | ❌ | ❌ | ❌ | ❌ |
-| Blue/green deployment | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Custom domain | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Scaling - auto scaling | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Managed Identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| API portal for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Spring Cloud Gateway for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Application Configuration Service for VMware Tanzu® | ✔️ | ❌ | ❌ | ❌ | ❌ |
-| VMware Tanzu® Service Registry | ✔️ | ❌ | ❌ | ❌ | ❌ |
-| VNET | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Outgoing IP Address | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| E2E TLS | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | ❌ | ❌ | ❌ | ❌ |
-| Bring your own storage | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Integrate service binding with Resource Connector | ✔️ | ❌ | ❌ | ❌ | ❌ |
-| Availability Zone | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| App Lifecycle events | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Automate app deployments with Terraform and Azure Pipeline Task | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Soft Deletion | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| SLA | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Feature | Java | Python | Node | .NET Core | Go |[Static Files](how-to-enterprise-deploy-static-file.md)|
+|--||--||--|-|--|
+| App lifecycle management | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Assign endpoint | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Azure Monitor | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Out of box APM integration | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Blue/green deployment | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Custom domain | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Scaling - auto scaling | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Managed Identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| API portal for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Spring Cloud Gateway for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Application Configuration Service for VMware Tanzu® | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| VMware Tanzu® Service Registry | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| VNET | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Outgoing IP Address | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| E2E TLS | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Bring your own storage | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Integrate service binding with Resource Connector | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Availability Zone | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| App Lifecycle events | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Automate app deployments with Terraform and Azure Pipeline Task | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Soft Deletion | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| SLA | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
## Next steps
spring-apps How To Enterprise Deploy Static File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-static-file.md
+
+ Title: Deploy static files in Azure Spring Apps Enterprise tier
+
+description: Learn how to deploy static files in Azure Spring Apps Enterprise tier.
+ Last updated : 10/19/2022
+# Deploy static files in Azure Spring Apps Enterprise tier
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to deploy your static files to Azure Spring Apps Enterprise tier. The static files are served by web servers such as Nginx or Apache HTTP Server.
+
+## Prerequisites
+
+- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+- One or more applications running in Azure Spring Apps. For more information on creating apps, see [How to Deploy Spring Boot applications from Azure CLI](./how-to-launch-from-source.md).
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.0.67 or higher.
+- Your static files or dynamic front-end application.
+
+## Deploy your static files
+
+You can deploy static files to Azure Spring Apps using NGINX or HTTPD web servers in the following ways:
+
+- You can deploy static files directly. Azure Spring Apps automatically configures the specified web server to serve the static files.
+- You can create your front-end application in the JavaScript framework of your choice, and then deploy your dynamic front-end application as static content.
+- You can create a server configuration file to customize the web server.
+
+### Deploy static files directly
+
+Use the following command to deploy static files directly using an auto-generated default server configuration file.
+
+```azurecli
+az spring app deploy \
+ --resource-group <your-resource-group-name> \
+ --service <your-Azure-Spring-Apps-name> \
+ --name <your-app-name> \
+ --source-path <path-to-source-code> \
+ --build-env BP_WEB_SERVER=nginx
+```
+
+For more information, see the [Configure an auto-generated server configuration file](#configure-an-auto-generated-server-configuration-file) section of this article.
+
+### Deploy your front-end application as static content
+
+Use the following command to deploy a dynamic front-end application as static content.
+
+```azurecli
+az spring app deploy \
+ --resource-group <your-resource-group-name> \
+ --service <your-Azure-Spring-Apps-name> \
+ --name <your-app-name> \
+ --source-path <path-to-source-code> \
+ --build-env BP_WEB_SERVER=nginx BP_NODE_RUN_SCRIPTS=build BP_WEB_SERVER_ROOT=build
+```
+
+### Deploy static files using a customized configuration file
+
+Use the following command to deploy static files using a customized server configuration file.
+
+```azurecli
+az spring app deploy \
+ --resource-group <your-resource-group-name> \
+ --service <your-Azure-Spring-Apps-name> \
+ --name <your-app-name> \
+ --source-path <path-to-source-code>
+```
+
+For more information, see the [Using a customized server configuration file](#using-a-customized-server-configuration-file) section of this article.
+
+## Sample code
+
+> [!NOTE]
+> The sample code is maintained by the Paketo open source community.
+
+The [Paketo buildpacks samples](https://github.com/paketo-buildpacks/samples/tree/main/web-servers) demonstrate common use cases for several different application types, including the following use cases:
+
+- Serving static files with a default server configuration file using `BP_WEB_SERVER` to select either [HTTPD](https://github.com/paketo-buildpacks/samples/blob/main/web-servers/no-config-file-sample/HTTPD.md) or [NGINX](https://github.com/paketo-buildpacks/samples/blob/main/web-servers/no-config-file-sample/NGINX.md).
+- Using Node Package Manager to build a [React app](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/javascript-frontend-sample) into static files that can be served by a web server. Use the following steps:
+ 1. Define a script under the `scripts` property of the *package.json* file that builds your production-ready static assets. For React, it's `build`.
+ 1. Find out where static assets are stored after the build script runs. For React, static assets are stored in `./build` by default.
+ 1. Set `BP_NODE_RUN_SCRIPTS` to the name of the build script.
+ 1. Set `BP_WEB_SERVER_ROOT` to the build output directory.
+- Serving static files with your own server configuration file, using either [HTTPD](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/httpd-sample) or [NGINX](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/nginx-sample).
+
+## Configure an auto-generated server configuration file
+
+You can use environment variables to modify the auto-generated server configuration file. The following table shows supported environment variables.
+
+| Environment Variable | Supported Value | Description |
+||-|-|
+| `BP_WEB_SERVER` | *nginx* or *httpd* | Specifies the web server type, either *nginx* for Nginx or *httpd* for Apache HTTP server. Required when using the auto-generated server configuration file. |
+| `BP_WEB_SERVER_ROOT` | An absolute file path or a file path relative to */workspace*. | Sets the root directory for the static files. The default is `public`. |
+| `BP_WEB_SERVER_ENABLE_PUSH_STATE` | *true* or *false* | Enables push state routing for your application. Regardless of the route that is requested, *index.html* is always served. Useful for single-page web applications. |
+| `BP_WEB_SERVER_FORCE_HTTPS` | *true* or *false* | Enforces HTTPS for server connections by redirecting all requests to use the HTTPS protocol. |
+
+The following environment variables aren't supported.
+
+- `BP_LIVE_RELOAD_ENABLED`
+- `BP_NGINX_VERSION`
+- `BP_HTTPD_VERSION`
+
+## Using a customized server configuration file
+
+You can configure the web server by using a customized server configuration file. Your configuration file must conform to the restrictions described in the following table.
+
+| Configuration | Description | Nginx Configuration | Httpd Configuration |
+|--|-|-|--|
+| Listening port | The web server must listen on port 8080. The service checks the port over TCP for readiness and liveness. You must use the templated variable `PORT` in the configuration file. The appropriate port number is injected when the web server is launched. | `listen {{PORT}}` | `Listen "${PORT}"` |
+| Log path | Configure the log path to write to the console. | `access_log /dev/stdout`, `error_log stderr` | `ErrorLog /proc/self/fd/2` |
+| File path with write permission | Web server is granted write permission to the */tmp* directory. Configuring the full path requires write permission under the */tmp* directory. | For example: *client_body_temp_path /tmp/client_body_temp* | |
+| Maximum accepted body size of client request | Web server is behind the gateway. The maximum accepted body size of the client request is set to 500m in the gateway and the value for web server must be less than 500m. | `client_max_body_size` should be less than 500m. | `LimitRequestBody` should be less than 500m. |
+
+## Buildpack bindings
+
+Deploying static files to Azure Spring Apps Enterprise tier supports the Dynatrace buildpack binding. The `htpasswd` buildpack binding isn't supported.
+
+For more information, see the [Buildpack bindings](how-to-enterprise-build-service.md#buildpack-bindings) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md).
+
+## Common build and deployment errors
+
+Your deployment of static files to Azure Spring Apps Enterprise tier may generate the following common build errors:
+
+- ERROR: No buildpack groups passed detection.
+- ERROR: Please check that you're running against the correct path.
+- ERROR: failed to detect: no buildpacks participating
+
+The root cause of these errors is that the web server type isn't specified. To resolve these errors, set the environment variable `BP_WEB_SERVER` to *nginx* or *httpd*.
+
+The following table describes common deployment errors when you deploy static files to Azure Spring Apps Enterprise tier.
+
+| Error message | Root cause | Solution |
+|--|-|--|
+| *112404: Exit code 0: purposely stopped, please refer to `https://aka.ms/exitcode`* | The web server failed to start. | Validate your server configuration file to see if there's a configuration error. Then, check whether your configuration file conforms to the restrictions described in the [Using a customized server configuration file](#using-a-customized-server-configuration-file) section. |
+| *mkdir() "/var/client_body_temp" failed (13: Permission denied)* | The web server doesn't have write permission to the specified path. | Configure the path under the directory */tmp*; for example: */tmp/client_body_temp*. |
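+
+To check a server configuration file for syntax errors before you deploy, you can run Nginx's built-in test locally. Replace the `{{PORT}}` template with a concrete port first, because the value is only injected at launch:
+
+```console
+nginx -t -c /full/path/to/nginx.conf
+```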
+
+## Next steps
+
+- [Azure Spring Apps](index.yml)
spring-apps Troubleshoot Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot-exit-code.md
The exit code indicates the reason the application terminated. The following lis
For example, you need to connect to Azure Key Vault to import certificates in your application, but your application doesn't have the necessary permissions to access it.
+ - If your application is a static file or dynamic front-end application served by a web server, see the [Common build and deployment errors](how-to-enterprise-deploy-static-file.md#common-build-and-deployment-errors) section of [Deploy static files in Azure Spring Apps Enterprise tier](how-to-enterprise-deploy-static-file.md).
+ - **137** - The application exited because of an out-of-memory error. The application requested more resources than the hosting platform could provide. Update your application's Java Virtual Machine (JVM) parameters to restrict resource usage, or scale up the application's resources. If the application is a Java application, check whether the JVM parameter values exceed the memory limit of your application; see the sketch below for one way to cap the heap.
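+
+A sketch of capping the JVM heap at deployment time follows; the heap values and the command shape are assumptions to adapt to your app and CLI version:
+
+```azurecli
+az spring app deploy \
+    --resource-group <resource-group-name> \
+    --service <service-instance-name> \
+    --name <app-name> \
+    --artifact-path <app-jar-path> \
+    --jvm-options='-Xms256m -Xmx512m'
+```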
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Title: Host keys for SFTP support for Azure Blob Storage (preview) | Microsoft Docs
+ Title: Host keys for SFTP support for Azure Blob Storage | Microsoft Docs
description: Find a list of valid host keys when using an SFTP client to connect with Azure Blob Storage.
-# Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
+# Host keys for SSH File Transfer Protocol (SFTP) support for Azure Blob Storage
This article contains a list of valid host keys used to connect to Azure Blob Storage from SFTP clients. Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management. For more information, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
-When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. During the public preview, you can verify the host key by finding that key in the list presented in this article.
-
-> [!IMPORTANT]
-> SFTP support is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
+When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. You can verify the host key by finding that key in the list presented in this article.
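+
+For example, you can print the fingerprint of the key that a given endpoint presents and compare it against the list below (the account name is a placeholder):
+
+```console
+ssh-keyscan -t rsa <storage-account>.blob.core.windows.net | ssh-keygen -lf -
+```
+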
## Valid host keys
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Title: Limitations & known issues with SFTP in Azure Blob Storage (preview) | Microsoft Docs
+ Title: Limitations & known issues with SFTP in Azure Blob Storage | Microsoft Docs
description: Learn about limitations and known issues of SSH File Transfer Protocol (SFTP) support for Azure Blob Storage. Previously updated : 09/13/2022 Last updated : 10/20/2022
-# Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
+# Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage
This article describes limitations and known issues of SFTP support for Azure Blob Storage.
-> [!IMPORTANT]
-> SFTP support is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
- > [!IMPORTANT] > Because you must enable hierarchical namespace for your account to use SFTP, all of the known issues that are described in the Known issues with [Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article also apply to your account. ## Known unsupported clients
-The following clients are known to be incompatible with SFTP for Azure Blob Storage (preview). See [Supported algorithms](secure-file-transfer-protocol-support.md#supported-algorithms) for more information.
+The following clients are known to be incompatible with SFTP for Azure Blob Storage. See [Supported algorithms](secure-file-transfer-protocol-support.md#supported-algorithms) for more information.
- Five9 - Kemp
To transfer files to or from Azure Blob Storage via SFTP clients, see the follow
## Authentication and authorization -- _Local users_ is the only form of identity management that is currently supported for the SFTP endpoint.
+- _Local users_ are the only form of identity management that is currently supported for the SFTP endpoint.
- Azure Active Directory (Azure AD) isn't supported for the SFTP endpoint.
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- Internet routing is not supported. Use Microsoft network routing. -- There's a 2 minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
+- There's a 2-minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
## Other - For performance issues and considerations, see [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md). -- Maximum file upload size via the SFTP endpoint is 100 GB.
+- Maximum file upload size via the SFTP endpoint is 91 GB.
- Special containers such as $logs, $blobchangefeed, $root, $web aren't accessible via the SFTP endpoint.
storage Secure File Transfer Protocol Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-performance.md
Title: SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage (preview) | Microsoft Docs
+ Title: SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage | Microsoft Docs
description: Optimize the performance of your SSH File Transfer Protocol (SFTP) requests by using the recommendations in this article. Previously updated : 09/13/2022 Last updated : 10/20/2022
-# SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage (preview)
+# SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage
Blob storage now supports the SSH File Transfer Protocol (SFTP). This article contains recommendations that will help you to optimize the performance of your storage requests. To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
-> [!IMPORTANT]
-> SFTP support is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
- ## Use concurrent connections to increase throughput Azure Blob Storage scales linearly until it reaches the maximum storage account egress and ingress limit. Therefore, your applications can achieve higher throughput by using more client connections. To view storage account egress and ingress limits, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md).
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Title: Connect to Azure Blob Storage using SFTP (preview) | Microsoft Docs
+ Title: Connect to Azure Blob Storage using SFTP | Microsoft Docs
description: Learn how to enable SFTP support for Azure Blob Storage so that you can directly connect to your Azure Storage account by using an SFTP client. Previously updated : 10/10/2022 Last updated : 10/20/2022
-# Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)
+# Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)
You can securely connect to the Blob Storage endpoint of an Azure Storage account by using an SFTP client, and then upload and download files. This article shows you how to enable SFTP, and then connect to Blob Storage by using an SFTP client. To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer Protocol (SFTP) in Azure Blob Storage](secure-file-transfer-protocol-support.md).
-> [!IMPORTANT]
-> SFTP support is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
- ## Prerequisites - A standard general-purpose v2 or premium block blob storage account. You can also enable SFTP as you create the account. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md).
You can use any SFTP client to securely connect and then transfer files. The fol
> The SFTP username is `storage_account_name`.`username`. In the example above the `storage_account_name` is "contoso4" and the `username` is "contosouser." The combined username becomes `contoso4.contosouser` for the SFTP command. > [!NOTE]
-> You might be prompted to trust a host key. During the public preview, valid host keys are published [here](secure-file-transfer-protocol-host-keys.md).
+> You might be prompted to trust a host key. Valid host keys are published [here](secure-file-transfer-protocol-host-keys.md).
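+
+For example, with the account and local user above, a connection attempt looks like this (the endpoint shown assumes the default public Blob Storage endpoint):
+
+```console
+sftp contoso4.contosouser@contoso4.blob.core.windows.net
+```
+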
After the transfer is complete, you can view and manage the file in the Azure portal.
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Title: SFTP support for Azure Blob Storage (preview) | Microsoft Docs
+ Title: SFTP support for Azure Blob Storage | Microsoft Docs
description: Blob storage now supports the SSH File Transfer Protocol (SFTP). Previously updated : 09/29/2022 Last updated : 10/20/2022
-# SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
+# SSH File Transfer Protocol (SFTP) support for Azure Blob Storage
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support lets you securely connect to Blob Storage via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management. > [!IMPORTANT]
-> SFTP support is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To help us understand your scenario, please complete [this form](https://forms.office.com/r/gZguN0j65Y) before you begin using SFTP support. After you've tested your end-to-end scenarios with SFTP, please share your experience by using [this form](https://forms.office.com/r/MgjezFV1NR). Both of these forms are optional.
+> SFTP support for Azure Blob Storage is not yet generally available in the West Europe region.
Here's a video that tells you more about it.
Prior to the release of this feature, if you wanted to use SFTP to transfer data
Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single click. Then you can set up local user identities for authentication to connect to your storage account with SFTP via port 22.
-This article describes SFTP support for Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md).
+This article describes SFTP support for Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md).
> [!Note] > SFTP is a platform-level service, so port 22 will be open even if the account option is disabled. If SFTP access isn't configured, all requests will receive a disconnect from the service.
This article describes SFTP support for Azure Blob Storage. To learn how to enab
SFTP support requires hierarchical namespace to be enabled. Hierarchical namespace organizes objects (files) into a hierarchy of directories and subdirectories in the same way that the file system on your computer is organized. The hierarchical namespace scales linearly and doesn't degrade data capacity or performance.
-Different protocols are supported by the hierarchical namespace. SFTP is one of these available protocols.
+Different protocols are supported by the hierarchical namespace. SFTP is one of these available protocols. The following image shows storage access via multiple protocols and REST APIs. For easier reading, this image uses the term Gen2 REST to refer to the Azure Data Lake Storage Gen2 REST API.
> [!div class="mx-imgBorder"] > ![hierarchical namespace](./media/secure-file-transfer-protocol-support/hierarchical-namespace-and-sftp-support.png)
To get started, enable SFTP support, create a local user, and assign permissions
### Known supported clients
-The following clients have compatible algorithm support with SFTP for Azure Blob Storage (preview). See [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md) if you're having trouble connecting.
+The following clients have compatible algorithm support with SFTP for Azure Blob Storage. See [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md) if you're having trouble connecting.
- AsyncSSH 2.1.0+ - Axway
See the [limitations and known issues article](secure-file-transfer-protocol-kno
## Pricing and billing
-> [!IMPORTANT]
-> During the public preview, the use of SFTP does not incur any additional charges. However, the standard transaction, storage, and networking prices for the underlying Azure Data Lake Store Gen2 account still apply. SFTP might incur additional charges when the feature becomes generally available.
+Enabling the SFTP endpoint has a cost of $0.30 per hour. We will start applying this hourly cost on or after December 1, 2022.
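+At that rate, an endpoint left enabled for a full 730-hour month works out to about $0.30 × 730 ≈ $219, so consider disabling SFTP support when you aren't using it.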
-Transaction and storage costs are based on factors such as storage account type and the endpoint that you use to transfer data to the storage account. To learn more, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md#understand-the-full-billing-model-for-azure-blob-storage).
+Transaction, storage, and networking prices for the underlying storage account apply. To learn more, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md#understand-the-full-billing-model-for-azure-blob-storage).
## See also
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a premium block
- [Known issues with Network File System (NFS) 3.0 protocol support in Azure Blob Storage](network-file-system-protocol-known-issues.md) -- [Known issues with SSH File Transfer Protocol (SFTP) support in Azure Blob Storage (preview)](secure-file-transfer-protocol-known-issues.md)
+- [Known issues with SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-known-issues.md)
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
description: In this quickstart, you learn how to use the Azure Blob Storage cli
Previously updated : 10/07/2022 Last updated : 10/20/2022
You can also authorize requests to Azure Blob Storage by using the account acces
The order and locations in which `DefaultAzureCredential` looks for credentials can be found in the [Azure Identity library overview](/java/api/overview/azure/identity-readme#defaultazurecredential). - For example, your app can authenticate using your Visual Studio Code sign-in credentials when developing locally. Your app can then use a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) once it has been deployed to Azure. No code changes are required for this transition. #### Assign roles to your Azure AD user account
Deleting the local source and downloaded files...
Done ```
-Before you begin the clean-up process, check your *data* folder for the two files. You can open them and observe that they're identical.
+Before you begin the cleanup process, check your *data* folder for the two files. You can compare them and observe that they're identical.
+
+## Clean up resources
-After you've verified the files, press the **Enter** key to delete the test files and finish the demo.
+After you've verified the files and finished testing, press the **Enter** key to delete the test files along with the container you created in the storage account. You can also use [Azure CLI](storage-quickstart-blobs-cli.md#clean-up-resources) to delete resources.
## Next steps
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Title: 'Quickstart: Azure Blob Storage library v12 - Python'
-description: In this quickstart, you learn how to use the Azure Blob Storage client library version 12 for Python to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
+ Title: 'Quickstart: Azure Blob Storage client library for Python'
+description: In this quickstart, you learn how to use the Azure Blob Storage client library for Python to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
Previously updated : 09/26/2022 Last updated : 10/20/2022
# Quickstart: Azure Blob Storage client library for Python
-Get started with the Azure Blob Storage client library for Python to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks.
+Get started with the Azure Blob Storage client library for Python to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks in an interactive console app.
[API reference documentation](/python/api/azure-storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Samples](../common/storage-samples-python.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples) ## Prerequisites -- An Azure account with an active subscription - [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An Azure Storage account - [create a storage account](../common/storage-account-create.md).-- [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
+- Azure account with an active subscription - [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio)
+- Azure Storage account - [create a storage account](../common/storage-account-create.md)
+- [Python](https://www.python.org/downloads/) 3.6+
## Setting up
-This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for Python.
+This section walks you through preparing a project to work with the Azure Blob Storage client library for Python.
### Create the project
-Create a Python application named *blob-quickstart-v12*.
+Create a Python application named *blob-quickstart*.
-1. In a console window (such as PowerShell, cmd, or bash), create a new directory for the project.
+1. In a console window (such as PowerShell or Bash), create a new directory for the project:
```console
- mkdir blob-quickstart-v12
+ mkdir blob-quickstart
```
-1. Switch to the newly created *blob-quickstart-v12* directory.
+1. Switch to the newly created *blob-quickstart* directory:
```console
- cd blob-quickstart-v12
+ cd blob-quickstart
```
-### Install the package
+### Install the packages
-From the project directory, install the Azure Blob Storage client library for Python package by using the `pip install` command.
+From the project directory, install packages for the Azure Blob Storage and Azure Identity client libraries using the `pip install` command. The **azure-identity** package is needed for passwordless connections to Azure services.
```console
-pip install azure-storage-blob
+pip install azure-storage-blob azure-identity
```
-This command installs the Azure Blob Storage for Python package and libraries on which it depends. In this case, the only dependency is the Azure core library for Python.
- ### Set up the app framework From the project directory, follow steps to create the basic structure of the app:
-1. Open a new text file in your code editor
-1. Add `import` statements, create the structure for the program, and include basic exception handling, as shown below
-1. Save the new file as *blob-quickstart-v12.py* in the *blob-quickstart-v12* directory.
-
+1. Open a new text file in your code editor.
+1. Add `import` statements, create the structure for the program, and include basic exception handling, as shown below.
+1. Save the new file as *blob-quickstart.py* in the *blob-quickstart* directory.
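+
+A minimal sketch of that structure might look like the following; the article's actual snippet may differ in details:
+
+```python
+import os, uuid
+from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
+
+try:
+    print("Azure Blob Storage Python quickstart sample")
+
+    # Quickstart code goes here
+
+except Exception as ex:
+    print('Exception:')
+    print(ex)
+```
+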
## Object model
Use the following Python classes to interact with these resources:
These example code snippets show you how to do the following tasks with the Azure Blob Storage client library for Python: -- [Get the connection string](#get-the-connection-string-for-authentication)
+- [Authenticate the client](#authenticate-the-client)
- [Create a container](#create-a-container) - [Upload blobs to a container](#upload-blobs-to-a-container) - [List the blobs in a container](#list-the-blobs-in-a-container) - [Download blobs](#download-blobs) - [Delete a container](#delete-a-container)
-### Get the connection string for authentication
+### Authenticate the client
+
+Application requests to Azure Blob Storage must be authorized. Using the `DefaultAzureCredential` class provided by the Azure Identity client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage.
+
+You can also authorize requests to Azure Blob Storage by using the account access key. However, this approach should be used with caution. Developers must be diligent to never expose the access key in an unsecure location. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` offers improved management and security benefits over the account key to allow passwordless authentication. Both options are demonstrated in the following example.
+
+### [Passwordless (Recommended)](#tab/managed-identity)
+
+`DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
+
+The order and locations in which `DefaultAzureCredential` looks for credentials can be found in the [Azure Identity library overview](/python/api/overview/azure/identity-readme#defaultazurecredential).
+
+For example, your app can authenticate using your Azure CLI sign-in credentials when developing locally. Your app can then use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) once it has been deployed to Azure. No code changes are required for this transition.
+
+#### Assign roles to your Azure AD user account
++
+#### Sign in and connect your app code to Azure using DefaultAzureCredential
+
+You can authorize access to data in your storage account using the following steps:
+
+1. Make sure you're authenticated with the same Azure AD account you assigned the role to on your storage account. You can authenticate via the Azure CLI, Visual Studio Code, or Azure PowerShell.
+
+ #### [Azure CLI](#tab/sign-in-azure-cli)
+
+    Sign in to Azure through the Azure CLI using the following command:
+
+ ```azurecli
+ az login
+ ```
+
+ #### [Visual Studio Code](#tab/sign-in-visual-studio-code)
+
+ You'll need to [install the Azure CLI](/cli/azure/install-azure-cli) to work with `DefaultAzureCredential` through Visual Studio Code.
+
+ On the main menu of Visual Studio Code, navigate to **Terminal > New Terminal**.
+
+    Sign in to Azure through the Azure CLI using the following command:
+
+ ```azurecli
+ az login
+ ```
+
+ #### [PowerShell](#tab/sign-in-powershell)
+
+    Sign in to Azure using PowerShell via the following command:
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
-The code below retrieves the storage account connection string from the environment variable created in the [Configure your storage connection string](#configure-your-storage-connection-string) section.
+2. To use `DefaultAzureCredential`, make sure that the **azure-identity** package is [installed](#install-the-packages), and the class is imported:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ ```
+
+3. Add this code inside the `try` block. When the code runs on your local workstation, `DefaultAzureCredential` authenticates to Azure by using the developer credentials of the prioritized tool you're signed in to, such as the Azure CLI or Visual Studio Code.
+
+ :::code language="python" source="~/azure-storage-snippets/blobs/quickstarts/python/blob-quickstart.py" id="Snippet_CreateServiceClientDAC":::
+
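+    The referenced snippet creates the service client roughly like this sketch (the account URL is a placeholder you replace with your own):
+
+    ```python
+    account_url = "https://<storageaccountname>.blob.core.windows.net"
+    default_credential = DefaultAzureCredential()
+
+    # Create the BlobServiceClient object with the passwordless credential
+    blob_service_client = BlobServiceClient(account_url, credential=default_credential)
+    ```
+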
+4. Make sure to update the storage account name in the URI of your `BlobServiceClient` object. The storage account name can be found on the overview page of the Azure portal.
+
+ :::image type="content" source="./media/storage-quickstart-blobs-python/storage-account-name.png" alt-text="A screenshot showing how to find the storage account name.":::
+
+ > [!NOTE]
+ > When deployed to Azure, this same code can be used to authorize requests to Azure Storage from an application running in Azure. However, you'll need to enable managed identity on your app in Azure. Then configure your storage account to allow that managed identity to connect. For detailed instructions on configuring this connection between Azure services, see the [Auth from Azure-hosted apps](/dotnet/azure/sdk/authentication-azure-hosted-apps) tutorial.
+
+### [Connection String](#tab/connection-string)
+
+A connection string includes the storage account access key and uses it to authorize requests. Always be careful to never expose the keys in an unsecure location.
+
+> [!NOTE]
+> If you plan to use connection strings, you'll need permissions for the following Azure RBAC action: [Microsoft.Storage/storageAccounts/listkeys/action](/azure/role-based-access-control/resource-provider-operations#microsoftstorage). The least privilege built-in role with permissions for this action is [Storage Account Key Operator Service Role](/azure/role-based-access-control/built-in-roles#storage-account-key-operator-service-role), but any role which includes this action will work.
++
+#### Configure your storage connection string
+
+After you copy the connection string, write it to a new environment variable on the local machine running the application. To set the environment variable, open a console window, and follow the instructions for your operating system. Replace `<yourconnectionstring>` with your actual connection string.
+
+**Windows**:
+
+```cmd
+setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
+```
+
+After you add the environment variable in Windows, you must start a new instance of the command window.
+
+**Linux**:
+
+```bash
+export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
+```
+
+The code below retrieves the connection string for the storage account from the environment variable created earlier, and uses the connection string to construct a service client object.
Add this code inside the `try` block:
+```python
+# Retrieve the connection string for use with the application. The storage
+# connection string is stored in an environment variable on the machine
+# running the application called AZURE_STORAGE_CONNECTION_STRING. If the environment variable is
+# created after the application is launched in a console or with Visual Studio,
+# the shell or application needs to be closed and reloaded to take the
+# environment variable into account.
+connect_str = os.getenv('AZURE_STORAGE_CONNECTION_STRING')
+
+# Create the BlobServiceClient object
+blob_service_client = BlobServiceClient.from_connection_string(connect_str)
+```
+
+> [!IMPORTANT]
+> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
++ ### Create a container
Decide on a name for the new container. The code below appends a UUID value to t
> [!IMPORTANT] > Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-Create an instance of the [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) class by calling the [from_connection_string](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient#from-connection-string-conn-str--credential-none-kwargs-) method. Then, call the [create_container](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient#create-container-name--metadata-none--public-access-none-kwargs-) method to actually create the container in your storage account.
+Call the [create_container](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient#create-container-name--metadata-none--public-access-none-kwargs-) method to actually create the container in your storage account.
Add this code to the end of the `try` block: ### Upload blobs to a container
The following code snippet:
Add this code to the end of the `try` block: ### List the blobs in a container
List the blobs in the container by calling the [list_blobs](/python/api/azure-st
Add this code to the end of the `try` block: ### Download blobs
Download the previously created blob by calling the [download_blob](/python/api/
Add this code to the end of the `try` block: ### Delete a container
The app pauses for user input by calling `input()` before it deletes the blob, c
Add this code to the end of the `try` block: ## Run the code This app creates a test file in your local folder and uploads it to Azure Blob Storage. The example then lists the blobs in the container, and downloads the file with a new name. You can compare the old and new files.
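+Taken together, the snippets you added to the `try` block look roughly like the following sketch (variable and file names are illustrative assumptions):
+
+```python
+# Create a uniquely named container
+container_name = str(uuid.uuid4())
+container_client = blob_service_client.create_container(container_name)
+
+# Create a local test file to upload
+os.makedirs("./data", exist_ok=True)
+local_file_name = "quickstart" + str(uuid.uuid4()) + ".txt"
+upload_file_path = os.path.join("./data", local_file_name)
+with open(upload_file_path, mode="w") as f:
+    f.write("Hello, World!")
+
+# Upload the file to the container
+blob_client = blob_service_client.get_blob_client(container=container_name, blob=local_file_name)
+print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)
+with open(upload_file_path, mode="rb") as data:
+    blob_client.upload_blob(data)
+
+# List the blobs in the container
+print("\nListing blobs...")
+for blob in container_client.list_blobs():
+    print("\t" + blob.name)
+
+# Download the blob to a local file with a new name
+download_file_path = upload_file_path.replace(".txt", "DOWNLOAD.txt")
+print("\nDownloading blob to\n\t" + download_file_path)
+with open(download_file_path, mode="wb") as download_file:
+    download_file.write(container_client.download_blob(blob_client.blob_name).readall())
+
+# Pause, then delete the container and the local files
+input("Press the Enter key to begin clean up")
+container_client.delete_container()
+os.remove(upload_file_path)
+os.remove(download_file_path)
+print("Done")
+```
+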
-Navigate to the directory containing the *blob-quickstart-v12.py* file, then execute the following `python` command to run the app.
+Navigate to the directory containing the *blob-quickstart.py* file, then execute the following `python` command to run the app:
```console
-python blob-quickstart-v12.py
+python blob-quickstart.py
```
-The output of the app is similar to the following example:
+The output of the app is similar to the following example (UUID values omitted for readability):
```output
-Azure Blob Storage v12 - Python quickstart sample
+Azure Blob Storage Python quickstart sample
Uploading to Azure Storage as blob:
- quickstartcf275796-2188-4057-b6fb-038352e35038.txt
+ quickstartUUID.txt
Listing blobs...
- quickstartcf275796-2188-4057-b6fb-038352e35038.txt
+ quickstartUUID.txt
Downloading blob to
- ./data/quickstartcf275796-2188-4057-b6fb-038352e35038DOWNLOAD.txt
+ ./data/quickstartUUIDDOWNLOAD.txt
Press the Enter key to begin clean up
Before you begin the cleanup process, check your *data* folder for the two files
## Clean up resources
-After you've verified the files and finished testing, press the **Enter** key to delete the test files along with the container you created in the storage account.
+After you've verified the files and finished testing, press the **Enter** key to delete the test files along with the container you created in the storage account. You can also use [Azure CLI](storage-quickstart-blobs-cli.md#clean-up-resources) to delete resources.
## Next steps
In this quickstart, you learned how to upload, download, and list blobs using Py
To see Blob storage sample apps, continue to: > [!div class="nextstepaction"]
-> [Azure Blob Storage SDK v12 Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob/samples)
+> [Azure Blob Storage library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob/samples)
- To learn more, see the [Azure Storage client libraries for Python](/azure/developer/python/sdk/storage/overview). - For tutorials, samples, quickstarts, and other documentation, visit [Azure for Python Developers](/azure/python/).
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 10/13/2022 Last updated : 10/20/2022
Azure AD Kerberos doesn't support using MFA to access Azure file shares configur
## Assign share-level permissions
-When you enable identity-based access, you can set for each share which users and groups have access to that particular share. Once a user is allowed into a share, NTFS permissions on individual files and folders take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
+When you enable identity-based access, you can define, for each share, which users and groups have access to that share. Once a user is allowed into a share, Windows ACLs (also called NTFS permissions) on individual files and directories take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
To set share-level permissions, follow the instructions in [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md).
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
The StorSimple Data Manager and Azure file shares have a few limitations you sho
* Any volume placed on [Windows Server Dynamic Disks](/troubleshoot/windows-server/backup-and-storage/best-practices-using-dynamic-disks) is not supported. (deprecated before Windows Server 2012) * The service doesn't work with volumes that are BitLocker encrypted or have [Data Deduplication](/windows-server/storage/data-deduplication/understand) enabled. * Corrupted StorSimple backups can't be migrated.
-* Special networking options, such as firewalls or private endpoint-only communication can't be enabled on either the source storage account where StorSimple backups are stored, nor on the target storage account that holds you Azure file shares.
+* Special networking options, such as firewalls or private endpoint-only communication, can't be enabled on either the source storage account where StorSimple backups are stored or on the target storage account that holds your Azure file shares.
### File fidelity
After your storage accounts are created, go to the **File share** section of the
:::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-share.png" alt-text="An Azure portal screenshot showing the new file share UI."::: :::column-end::: :::column:::
- </br>**Name**</br>Lowercase letters, numbers, and hyphens are supported.</br></br>**Quota**</br>Quota here is comparable to an SMB hard quota on a Windows Server instance. The best practice is to not set a quota here because your migration and other services will fail when the quota is reached.</br></br>**Tiers**</br>Select **Transaction optimized** for your new file share. During the migration, many transactions will occur. Its more cost efficient to change your tier later to the tier best suited to your workload.
+ </br>**Name**</br>Lowercase letters, numbers, and hyphens are supported.</br></br>**Quota**</br>Quota here is comparable to an SMB hard quota on a Windows Server instance. The best practice is to not set a quota here because your migration and other services will fail when the quota is reached.</br></br>**Tiers**</br>Select **Transaction optimized** for your new file share. During the migration, many transactions will occur. It's more cost-efficient to change your tier later to the tier best suited to your workload.
:::column-end::: :::row-end:::
In the job blade that opens, you can see your job's current status and a list of
The migration jobs have two columns in the list of backups that list any issues that may have occurred during the copy: * Copy errors </br>This column lists files or folders that should have been copied but weren't. These errors are often recoverable. When a backup lists item issues in this column, review the copy logs. If you need to migrate these files, select **Retry backup**. This option will become available once the backup has finished processing. The [Managing a migration job](#manage-a-migration-job) section explains your options in more detail.
-* Unsupported files </br>This column lists files or folders that can't be migrated. Azure Storage has limitations in file names, path lengths, and file types that currently or logically can't be stored in an Azure file share. A migration job won't pause for these kind of errors. Retrying migration of the backup won't change the result. When a backup lists item issues in this column, review the copy logs and take note. If such issues arise in your last backup and you found in the copy log that the failure was due to a file name, path length or other issue you have influence over, you may want to remedy the issue in the live StorSImple volume, take a StorSimple volume backup and create a new migration job with just that backup. You will then migrate this remedied namespace and it will become the most recent / live version of the Azure file share. This is a manual and time consuming process. Review the copy logs carefully and evaluate if it's worth it.
+* Unsupported files </br>This column lists files or folders that can't be migrated. Azure Storage has limitations in file names, path lengths, and file types that currently or logically can't be stored in an Azure file share. A migration job won't pause for these kinds of errors. Retrying migration of the backup won't change the result. When a backup lists item issues in this column, review the copy logs and take note. If such issues arise in your last backup and you found in the copy log that the failure was due to a file name, path length, or other issue you have influence over, you may want to remedy the issue in the live StorSimple volume, take a StorSimple volume backup, and create a new migration job with just that backup. You will then migrate this remedied namespace and it will become the most recent / live version of the Azure file share. This is a manual and time-consuming process. Review the copy logs carefully and evaluate if it's worth it.
These copy logs are *\*.csv* files listing namespace items that succeeded and items that failed to get copied. The errors are further split into the previously discussed categories. From the log file location, you can find logs for failed files by searching for "failed". The result should be a set of logs for files that failed to copy. Sort these logs by size. There may be extra logs produced at 17 bytes in size. They are empty and can be ignored. With a sort, you can focus on the logs with content.
When using the StorSimple Data Manager migration service, either an entire migra
| |*Could not find file &lt;path&gt; </br>Could not find a part of the path* |The job definition allows you to provide a source sub-path. This error is shown when that path does not exist. For instance: *\Share1 > \Share\Share1* </br> In this example you've specified *\Share1* as a sub-path in the source, mapping to another sub-path in the target. However, the source path does not exist (was misspelled?). Note: Windows is case preserving but not case dependent. Meaning specifying *\Share1* and *\share1* is equivalent. Also: Target paths that don't exist will be automatically created. | | |*This request is not authorized to perform this operation* |This error shows when the source StorSimple storage account or the target storage account with the Azure file share has a firewall setting enabled. You must allow traffic over the public endpoint and not restrict it with further firewall rules. Otherwise the Data Transformation Service will be unable to access either storage account, even if you authorized it. Disable any firewall rules and re-run the job. | |**Copying Files** |*The account being accessed does not support HTTP* |This is an Azure Files bug that is being fixed. The temporary mitigation is to disable internet routing on the target storage account or use the Microsoft routing endpoint. |
-| |*The specified share is full* |If the target is a premium Azure file share, ensure you have provisioned sufficient capacity for the share. Temporary over-provisioning is a common practice. If the target is a standard Azure file share, check that the target share has the "large file share" feature enabled. Standard storage is growing as you use the share. However, if you use a legacy storage account as a target, you might encounter a 5 TiB share limit. You will have to manually enable the ["Large file share"](storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) feature. Fix the limits on the target and re-run the job. |
+| |*The specified share is full* |If the target is a premium Azure file share, ensure you have provisioned sufficient capacity for the share. Temporary over-provisioning is a common practice. If the target is a standard Azure file share, check that the target share has the "large file share" feature enabled. Standard storage is growing as you use the share. However, if you use a legacy storage account as a target, you might encounter a 5 TiB share limit. You will have to manually enable the ["Large file share"](storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account) feature. Fix the limits on the target and re-run the job. |
### Item level errors
During the copy phase of a migration job run, individual namespace items (files
|**Copy** |*-2146233088 </br>The server is busy.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. | | |*-2146233088 </br>Operation could not be completed within the specified time.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. | | |*Upload timed out or copy not started* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*-2146233029 </br>The operation was cancelled.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
+| |*-2146233029 </br>The operation was canceled.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
| |*1920 </br>The file cannot be accessed by the system.* |This is a common error when the migration engine encounters a reparse point, link, or junction. They are not supported. These types of files can't be copied. Review the [Known limitations](#known-limitations) section and the [File fidelity](#file-fidelity) section in this article. | | |*-2147024891 </br>Access is denied* |This is an error for files that are encrypted in a way that they can't be accessed on the disk. Files that can be read from disk but simply have encrypted content are not affected and can be copied. Your only option is to copy them manually. You can find such items by mounting the affected volume and running the following command: `get-childitem <path> [-Recurse] -Force -ErrorAction SilentlyContinue | Where-Object {$_.Attributes -ge "Encrypted"} | format-list fullname, attributes` | | |*Not a valid Win32 FileTime. Parameter name: fileTime* |In this case, the file can be accessed but can't be evaluated for copy because a timestamp the migration engine depends on is either corrupted or was written by an application in an incorrect format. There is not much you can do, because you can't change the timestamp in the backup. If retaining this file is important, perhaps on the latest version (last backup containing this file) you manually copy the file, fix the timestamp, and then move it to the target Azure file share. This option doesn't scale very well but is an option for high-value files where you want to have at least one version retained in your target. |
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
Title: Create an Azure file share
+ Title: Create an SMB Azure file share
-description: How to create an Azure file share by using the Azure portal, PowerShell, or the Azure CLI.
+description: How to create an SMB Azure file share by using the Azure portal, PowerShell, or Azure CLI.
Previously updated : 07/27/2021 Last updated : 10/20/2022
-# Create an Azure file share
+# Create an SMB Azure file share
To create an Azure file share, you need to answer three questions about how you will use it: - **What are the performance requirements for your Azure file share?**
For more information on these three choices, see [Planning for an Azure Files de
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ## Prerequisites-- This article assumes that you have already created an Azure subscription. If you don't already have a subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- This article assumes that you've already created an Azure subscription. If you don't already have a subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-az-ps).-- If you intend to use the Azure CLI, [install the latest version](/cli/azure/install-azure-cli).
+- If you intend to use Azure CLI, [install the latest version](/cli/azure/install-azure-cli).
## Create a storage account Azure file shares are deployed into *storage accounts*, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares.
The advanced section contains several important settings for Azure file shares:
:::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-secure-transfer.png" alt-text="A screenshot of secure transfer enabled in the advanced settings for the storage account."::: -- **Large file shares**: This field enables the storage account for file shares spanning up to 100 TiB. Enabling this feature will limit your storage account to only locally redundant and zone redundant storage options. Once a GPv2 storage account has been enabled for large file shares, you cannot disable the large file share capability. FileStorage storage accounts (storage accounts for premium file shares) do not have this option, as all premium file shares can scale up to 100 TiB.
+- **Large file shares**: This field enables the storage account for file shares spanning up to 100 TiB. Enabling this feature will limit your storage account to only locally redundant and zone redundant storage options. Once a GPv2 storage account has been enabled for large file shares, you cannot disable the large file share capability. FileStorage storage accounts (storage accounts for premium file shares) don't have this option, as all premium file shares can scale up to 100 TiB.
:::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-large-file-shares.png" alt-text="A screenshot of the large file share setting in the storage account's advanced blade.":::
-The other settings that are available in the advanced tab (hierarchical namespace for Azure Data Lake storage gen 2, default blob tier, NFSv3 for blob storage, etc.) do not apply to Azure Files.
+The other settings that are available in the advanced tab (hierarchical namespace for Azure Data Lake storage gen 2, default blob tier, NFSv3 for blob storage, etc.) don't apply to Azure Files.
> [!Important]
-> Selecting the blob access tier does not affect the tier of the file share.
+> Selecting the blob access tier doesn't affect the tier of the file share.
#### Tags Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. These are optional and can be applied after storage account creation.
The final step to create the storage account is to select the **Create** button
# [PowerShell](#tab/azure-powershell) To create a storage account using PowerShell, we will use the `New-AzStorageAccount` cmdlet. This cmdlet has many options; only the required options are shown. To learn more about advanced options, see the [`New-AzStorageAccount` cmdlet documentation](/powershell/module/az.storage/new-azstorageaccount).
-To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish, however note that the storage account name must be globally unique.
+To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique.
```powershell $resourceGroupName = "myResourceGroup"
$storAcct = New-AzStorageAccount `
``` # [Azure CLI](#tab/azure-cli)
-To create a storage account using the Azure CLI, we will use the az storage account create command. This command has many options; only the required options are shown. To learn more about the advanced options, see the [`az storage account create` command documentation](/cli/azure/storage/account).
+To create a storage account using Azure CLI, we will use the az storage account create command. This command has many options; only the required options are shown. To learn more about the advanced options, see the [`az storage account create` command documentation](/cli/azure/storage/account).
To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique.
az storage account create \
-### Enable large files shares on an existing account
-Before you create an Azure file share on an existing storage account, you may want to enable it for large file shares (up to 100 TiB) if you haven't already. Standard storage accounts using either LRS or ZRS can be upgraded to support large file shares without causing downtime for existing file shares on the storage account. If you have a GRS, GZRS, RA-GRS, or RA-GZRS account, you will need to convert it to an LRS account before proceeding.
+### Enable large file shares on an existing account
+Before you create an Azure file share on an existing storage account, you might want to enable large file shares (up to 100 TiB) on the storage account if you haven't already. Standard storage accounts using either LRS or ZRS can be upgraded to support large file shares without causing downtime for existing file shares on the storage account. If you have a GRS, GZRS, RA-GRS, or RA-GZRS account, you'll need to convert it to an LRS account before proceeding.
# [Portal](#tab/azure-portal) 1. Open the [Azure portal](https://portal.azure.com), and navigate to the storage account where you want to enable large file shares.
Standard file shares may be deployed into one of the standard tiers: transaction
The **quota** property means something slightly different between premium and standard file shares: -- For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot go. If a quota is not specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares property is not set for a storage account). If you did not create your storage account with large file shares enabled, see [Enable large files shares on an existing account](#enable-large-files-shares-on-an-existing-account) for how to enable 100 TiB file shares.
+- For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot go. If a quota isn't specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares property is not set for a storage account). If you did not create your storage account with large file shares enabled, see [Enable large file shares on an existing account](#enable-large-file-shares-on-an-existing-account) for how to enable 100 TiB file shares.
- For premium file shares, quota means **provisioned size**. The provisioned size is the amount that you will be billed for, regardless of actual usage. The IOPS and throughput available on a premium file share are based on the provisioned size. For more information on how to plan for a premium file share, see [provisioning premium file shares](understanding-billing.md#provisioned-model).
az storage share-rm update \
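A sketch of the full quota update follows; resource names are illustrative, and the quota is specified in GiB:

```azurecli
# Raise the share quota (up to 102400 GiB when large file shares are enabled)
az storage share-rm update \
    --resource-group myResourceGroup \
    --storage-account mystorageacct \
    --name myshare \
    --quota 1024
```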
+## Delete a file share
+To delete an Azure file share, you can use the Azure portal, Azure PowerShell, or Azure CLI. Shares can be recovered within the [soft delete](storage-files-prevent-file-share-deletion.md) retention period.
+
+# [Portal](#tab/azure-portal)
+1. Open the [Azure portal](https://portal.azure.com), and navigate to the storage account that contains the file share you want to delete.
+1. Open the storage account and select **File shares**.
+1. Select the file share you want to delete.
+1. Select **Delete share**.
+1. Check the box confirming that you agree to the deletion of the file share and all its content.
+1. Select **Delete**.
++
+# [PowerShell](#tab/azure-powershell)
+1. Log in to your Azure account and specify your tenant ID.
+
+ ```azurepowershell
+ Login-AzAccount -TenantId <YourTenantID>
+ ```
+
+1. Run the following script. Replace `<YourStorageAccountName>`, `<YourStorageAccountKey>`, and `<FileShareName>` with your information. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` cmdlet.
+
+ ```azurepowershell
+ $context = New-AzStorageContext -StorageAccountName <YourStorageAccountName> -StorageAccountKey <YourStorageAccountKey>
+ Remove-AzStorageShare -Context $context -Name "<FileShareName>"
+ ```
+
+# [Azure CLI](#tab/azure-cli)
+You can delete an Azure file share with the [`az storage share delete`](/cli/azure/storage/share#az-storage-share-delete) command. Replace `<yourFileShareName>` and `<yourStorageAccountName>` with your information.
+
+```azurecli
+
+az storage share delete \
+ --name <yourFileShareName> \
+ --account-name <yourStorageAccountName>
+```
+++

## Next steps
-- [Plan for a deployment of Azure Files](storage-files-planning.md) or [Plan for a deployment of Azure File Sync](../file-sync/file-sync-planning.md).
-- [Networking overview](storage-files-networking-overview.md).
-- Connect and mount a file share on [Windows](storage-how-to-use-files-windows.md), [macOS](storage-how-to-use-files-mac.md), and [Linux](storage-how-to-use-files-linux.md).
+- [Planning for an Azure Files deployment](storage-files-planning.md) or [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md).
+- [Azure Files networking overview](storage-files-networking-overview.md).
+- Mount an SMB file share on [Windows](storage-how-to-use-files-windows.md), [macOS](storage-how-to-use-files-mac.md), or [Linux](storage-how-to-use-files-linux.md).
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-performance.md
To confirm whether your share is being throttled, you can access and use Azure m
#### Solution -- If you're using a standard file share, [enable large file shares](storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) on your storage account and [increase the size of file share quota to take advantage of the large file share support](storage-how-to-create-file-share.md#expand-existing-file-shares). Large file shares support great IOPS and bandwidth limits; see [Azure Files scalability and performance targets](storage-files-scale-targets.md) for details.
+- If you're using a standard file share, [enable large file shares](storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account) on your storage account and [increase the size of file share quota to take advantage of the large file share support](storage-how-to-create-file-share.md#expand-existing-file-shares). Large file shares support greater IOPS and bandwidth limits; see [Azure Files scalability and performance targets](storage-files-scale-targets.md) for details.
- If you're using a premium file share, increase the provisioned file share size to increase the IOPS limit. To learn more, see [Understanding provisioning for premium file shares](./understanding-billing.md#provisioned-model). ### Cause 2: Metadata or namespace heavy workload
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
This article lists the known limitations and issues with Azure Synapse Link for
This is the list of known limitations for Azure Synapse Link for SQL. ### Azure SQL DB and SQL Server 2022
-* Users must use an Azure Synapse Analytics workspace created on or after May 24, 2022, to get access to Azure Synapse Link for SQL functionality.
-* Running Azure Synapse Analytics in a managed virtual network isn't supported. Users need to check "Disable Managed virtual network" and "Allow connections from all IP addresses" when creating their workspace.
* Source tables must have primary keys. * The following data types aren't supported for primary keys in the source tables: * real
This is the list of known limitations for Azure Synapse Link for SQL.
* System tables can't be replicated. * The security configuration from the source database will **NOT** be reflected in the target dedicated SQL pool. * Enabling Azure Synapse Link for SQL will create a new schema called `changefeed`. Don't use this schema, as it is reserved for system use.
-* Source tables with non-default collations: UTF8, Japanese can't be replicated to Synapse. Here's the [supported collations in Synapse SQL Pool](../sql/reference-collation-types.md).
+* Source tables with collations that are unsupported by Synapse SQL dedicated pool, such as UTF8 and certain Japanese collations, can't be replicated. See the [supported collations in Synapse SQL pool](../sql/reference-collation-types.md).
* Single row updates (including off-page storage) larger than 370 MB aren't supported. ### Azure SQL DB only * Azure Synapse Link for SQL isn't supported on Free, Basic, or Standard tier with fewer than 100 DTUs. * Azure Synapse Link for SQL isn't supported on SQL Managed Instances.
-* Users need to check "Allow Azure services and resources to access this server" in the firewall settings of their source database server.
* Service principal isn't supported for authenticating to source Azure SQL DB, so when creating Azure SQL DB linked service, choose SQL authentication, user-assigned managed identity (UAMI), or system-assigned managed identity (SAMI). * Azure Synapse Link can't be enabled on the secondary database once a GeoDR failover has happened if the secondary database has a different name from the primary database. * If you enabled Azure Synapse Link for SQL on your database as a Microsoft Azure Active Directory (Azure AD) user, Point-in-time restore (PITR) will fail. PITR will only work when you enable Azure Synapse Link for SQL on your database as a SQL user.
update-center Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md
To install one time updates on a single VM, follow these steps:
1. Select **Install now** to proceed with the one-time updates.
-1. In **Install one-time updates**, select **+Add machine** to add the machine for deploying one-time.
+ - In **Install one-time updates**, select **+Add machine** to add the machine for deploying one-time.
-1. In **Select resources**, choose the machine and select **Add**.
+ - In **Select resources**, choose the machine and select **Add**.
1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only to a select set of updates, it's necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine. > [!NOTE]
- > Selected Updates shows a preview of OS updates which may be installed based on the last OS update assessment information available. If the OS update assessment information in update center management (preview) is obsolete, the actual updates installed would vary. Especially if you have chosen to install a specific update category, where the OS updates applicable may vary as new packages or KB Ids may be available for the category.
+ > - Selected Updates shows a preview of OS updates that may be installed, based on the last OS update assessment information available. If the OS update assessment information in update management center (preview) is obsolete, the updates actually installed may differ, especially if you've chosen a specific update category, because the applicable OS updates can change as new packages or KB IDs become available for that category.
+ > - Update management center (preview) doesn't support driver updates.
+ - Select **+Include update classification**. In **Include update classification**, select the appropriate classification(s) that must be installed on your machines.
To install one time updates on a single VM, follow these steps:
1. Select **Install now** to proceed with installing updates.
-1. In **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next** and follow the procedure from step 6 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
+1. In **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next** and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
A notification appears to inform you the activity has started and another is created when it's completed. When it is successfully completed, you can view the installation operation results in **History**. The status of the operation can be viewed at any time from the [Azure Activity log](../azure-monitor/essentials/activity-log.md).
To install one time updates on a single VM, follow these steps:
1. Under **Operations**, select **Updates**. 1. In **Updates**, select **Go to Updates using Update Center**. 1. In **Updates (Preview)**, select **One-time update** to install the updates.
-1. In **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next** and follow the procedure from step 6 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
+1. In **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next** and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
To schedule recurring updates on a single VM, follow these steps:
1. In the **Machines** page, select your machine and select **Next** to continue.
+1. In the **Updates** page, specify the updates to include in the deployment, such as update classification(s) or KB IDs/packages that must be installed when you trigger your schedule.
+
+ > [!Note]
+ > Update management center (preview) doesn't support driver updates.
+ 1. In the **Tags** page, assign tags to maintenance configurations. 1. In the **Review + Create** page, verify your update deployment options and select **Create**.
To schedule recurring updates at scale, follow these steps:
1. In the **Machines** page, verify if the selected machines are listed. You can add or remove machines from the list. Select **Next** to continue.
+1. In the **Updates** page, specify the updates to include in the deployment, such as update classification(s) or KB IDs/packages that must be installed when you trigger your schedule.
+
+ > [!Note]
+ > Update management center (preview) doesn't support driver updates.
++ 1. In the **Tags** page, assign tags to maintenance configurations. 1. In the **Review + Create** page, verify your update deployment options and select **Create**.
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
This article details the Windows and Linux operating systems supported and syste
### Operating system updates Update management center (preview) supports operating system updates for both Windows and Linux.
+> [!NOTE]
+> Update management center (preview) doesn't support driver updates.
+ ### First party updates on Windows By default, the Windows Update client is configured to provide updates only for Windows. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other products, including security patches for Microsoft SQL Server and other Microsoft software. You can configure this option if you have downloaded and copied the latest [Administrative template files](https://support.microsoft.com/help/3087759/how-to-create-and-manage-the-central-store-for-group-policy-administra) available for Windows 2016 and later.
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
Title: Azure Virtual Desktop for Azure Stack HCI (preview) overview
description: Overview of Azure Virtual Desktop for Azure Stack HCI (preview). Previously updated : 10/14/2022 Last updated : 10/20/2022
-# Azure Virtual Desktop for Azure Stack HCI (preview)
+# Azure Virtual Desktop for Azure Stack HCI overview (preview)
Azure Virtual Desktop for Azure Stack HCI (preview) lets you deploy Azure Virtual Desktop session hosts on your on-premises Azure Stack HCI infrastructure. You manage your session hosts from the Azure portal.
The following issues affect the preview version of Azure Virtual Desktop for Azu
- Azure Virtual Desktop for Azure Stack HCI doesn't currently support host pools containing both cloud and on-premises session hosts. Each host pool in the deployment must have only one type of session host. -- When connecting to a Windows 10 or Windows 11 Enterprise multi-session virtual desktop, users may see activation issues, such as a desktop watermark saying "Activate Windows", even if they have an eligible license.- - Session hosts on Azure Stack HCI don't support certain cloud-only Azure services. - Because Azure Stack HCI supports so many types of hardware and on-premises networking capabilities, performance and user density may vary widely compared to session hosts running in the Azure cloud. Azure Virtual Desktop's [virtual machine sizing guidelines](/windows-server/remote/remote-desktop-services/virtual-machine-recs) are broad, so you should only use them for initial performance estimates.
virtual-desktop Azure Stack Hci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci.md
Title: Set up Azure Virtual Desktop for Azure Stack HCI (preview) - Azure
description: How to set up Azure Virtual Desktop for Azure Stack HCI (preview). Previously updated : 10/17/2022 Last updated : 10/20/2022
In order to use Azure Virtual Desktop for Azure Stack HCI, you'll need the follo
- An [Azure Stack HCI cluster registered with Azure](/azure-stack/hci/deploy/register-with-azure) in the same subscription. -- Azure Arc VM management should be set up on the Azure Stack HCI cluster. For more information, see [VM provisioning through Azure portal on Azure Stack HCI (preview)](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+- Azure Arc virtual machine (VM) management should be set up on the Azure Stack HCI cluster. For more information, see [VM provisioning through Azure portal on Azure Stack HCI (preview)](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
- [An on-premises Active Directory (AD) synced with Azure Active Directory](/azure/architecture/reference-architectures/identity/azure-ad). The AD domain should resolve using DNS. For more information, see [Prerequisites for Azure Virtual Desktop](prerequisites.md#network).
Follow the steps below for a simplified process of setting up Azure Virtual Desk
> [!NOTE] > For more session host configurations, use the Full Configuration [(CreateHciHostpoolTemplate.json)](https://github.com/Azure/RDS-Templates/blob/master/ARM-wvd-templates/HCI/CreateHciHostpoolTemplate.json) template, which offers all the features that can be used to deploy Azure Virtual Desktop on Azure Stack HCI.
+## Windows OS activation
+
+Windows VMs must be licensed and activated before you can use them on Azure Stack HCI.
+
+For activating your multi-session OS VMs (Windows 10, Windows 11, or later), enable Azure Benefits on the VM once it is created. Make sure that Azure Benefits are also enabled on the host computer. For more information, see [Azure Benefits on Azure Stack HCI](/azure-stack/hci/manage/azure-benefits).
+
+> [!NOTE]
+> You must manually enable access for each VM that requires Azure Benefits.
+
+For all other OS images (such as Windows Server or single-session OS), Azure Benefits is not required. Continue to use the existing activation methods. For more information, see [Activate Windows Server VMs on Azure Stack HCI](/azure-stack/hci/manage/vm-activate).
++ ## Optional configuration Now that you've set up Azure Virtual Desktop for Azure Stack HCI, here are a few optional things you can do depending on your deployment needs:
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
You can achieve the direct line of sight connectivity required to use RDP Shortp
To use RDP Shortpath for managed networks, you must enable a UDP listener on your session hosts. By default, port **3390** is used, although you can use a different port.
-The following diagram gives a high-level overview of the RDP Shortpath network connection:
+The following diagram gives a high-level overview of the network connections when using RDP Shortpath for managed networks and session hosts joined to an Active Directory domain.
### Connection sequence
There are four primary components used to establish the RDP Shortpath data flow
> [!TIP] > RDP Shortpath for public networks will work automatically without any additional configuration, providing networks and firewalls allow the traffic through and RDP transport settings in the Windows operating system for session hosts and clients are using their default values.
+The following diagram gives a high-level overview of the network connections when using RDP Shortpath for public networks and session hosts joined to Azure Active Directory (Azure AD).
++ ### Network Address Translation and firewalls Most Azure Virtual Desktop clients run on computers on the private network. Internet access is provided through a Network Address Translation (NAT) gateway device. Therefore, the NAT gateway modifies all network requests from the private network that are destined for the internet. This modification shares a single public IP address across all of the computers on the private network.
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
Title: Automatic instance repairs with Azure virtual machine scale sets
+ Title: Automatic instance repairs with Azure Virtual Machine Scale Sets
description: Learn how to configure automatic repairs policy for VM instances in a scale set Previously updated : 02/28/2020 Last updated : 10/19/2022
-# Automatic instance repairs for Azure virtual machine scale sets
+# Automatic instance repairs for Azure Virtual Machine Scale Sets
-Enabling automatic instance repairs for Azure virtual machine scale sets helps achieve high availability for applications by maintaining a set of healthy instances. If an instance in the scale set is found to be unhealthy as reported by [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md), then this feature automatically performs instance repair by deleting the unhealthy instance and creating a new one to replace it.
+Enabling automatic instance repairs for Azure Virtual Machine Scale Sets helps achieve high availability for applications by maintaining a set of healthy instances. If the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) find that an instance is unhealthy, this feature automatically repairs it by deleting the unhealthy instance and creating a new one to replace it.
## Requirements for using automatic instance repairs **Enable application health monitoring for scale set**
-The scale set should have application health monitoring for instances enabled. This can be done using either [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). Only one of these can be enabled at a time. The application health extension or the load balancer probes ping the application endpoint configured on virtual machine instances to determine the application health status. This health status is used by the scale set orchestrator to monitor instance health and perform repairs when required.
+The scale set should have application health monitoring for instances enabled. Health monitoring can be done using either [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md), where only one can be enabled at a time. The application health extension or the load balancer probes ping the application endpoint configured on virtual machine instances to determine the application health status. This health status is used by the scale set orchestrator to monitor instance health and perform repairs when required.
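For example, a minimal sketch of enabling the Application Health extension on a Linux scale set with Azure CLI; the endpoint settings (protocol, port, and request path) are illustrative assumptions:

```azurecli
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name ApplicationHealthLinux \
    --publisher Microsoft.ManagedServices \
    --settings '{"protocol": "http", "port": 80, "requestPath": "/health"}'
```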
**Configure endpoint to provide health status** Before enabling the automatic instance repairs policy, ensure that the scale set instances have an application endpoint configured to emit the application health status. When an instance returns status 200 (OK) on this application endpoint, then the instance is marked as "Healthy". In all other cases, the instance is marked "Unhealthy", including the following scenarios: -- When there is no application endpoint configured inside the virtual machine instances to provide application health status
+- When there's no application endpoint configured inside the virtual machine instances to provide application health status
- When the application endpoint is incorrectly configured-- When the application endpoint is not reachable
+- When the application endpoint isn't reachable
For instances marked as "Unhealthy", automatic repairs are triggered by the scale set. Ensure the application endpoint is correctly configured before enabling the automatic repairs policy to avoid unintended instance repairs while the endpoint is being configured.
Resource or subscription moves are currently not supported for scale sets when a
This feature is currently not supported for Service Fabric scale sets.
+**Restriction for VMs with provisioning errors**
+
+Automatic repairs doesn't currently support scenarios where a VM instance is marked *Unhealthy* due to a provisioning failure. VMs must be successfully initialized to enable health monitoring and automatic repair capabilities.
+ ## How do automatic instance repairs work? The automatic instance repair feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, then the scale set performs a repair action by deleting the unhealthy instance and creating a new one to replace it. The latest virtual machine scale set model is used to create the new instance. This feature can be enabled in the virtual machine scale set model by using the *automaticRepairsPolicy* object. ### Batching
-The automatic instance repair operations are performed in batches. At any given time, no more than 5% of the instances in the scale set are repaired through the automatic repairs policy. This helps avoid simultaneous deletion and re-creation of a large number of instances if found unhealthy at the same time.
+The automatic instance repair operations are performed in batches. At any given time, no more than 5% of the instances in the scale set are repaired through the automatic repairs policy. This process helps avoid simultaneous deletion and re-creation of a large number of instances if found unhealthy at the same time.
### Grace period
-When an instance goes through a state change operation because of a PUT, PATCH or POST action performed on the scale set (for example reimage, redeploy, update, etc.), then any repair action on that instance is performed only after waiting for the grace period. Grace period is the amount of time to allow the instance to return to healthy state. The grace period starts after the state change has completed. This helps avoid any premature or accidental repair operations. The grace period is honored for any newly created instance in the scale set (including the one created as a result of repair operation). Grace period is specified in minutes in ISO 8601 format and can be set using the property *automaticRepairsPolicy.gracePeriod*. Grace period can range between 10 minutes and 90 minutes, and has a default value of 30 minutes.
+When an instance goes through a state change operation because of a PUT, PATCH, or POST action performed on the scale set, then any repair action on that instance is performed only after the grace period ends. Grace period is the amount of time to allow the instance to return to healthy state. The grace period starts after the state change has completed, which helps avoid any premature or accidental repair operations. The grace period is honored for any newly created instance in the scale set, including the one created as a result of repair operation. Grace period is specified in minutes in ISO 8601 format and can be set using the property *automaticRepairsPolicy.gracePeriod*. Grace period can range between 10 minutes and 90 minutes, and has a default value of 30 minutes.
### Suspension of Repairs
-Virtual machine scale sets provide the capability to temporarily suspend automatic instance repairs if needed. The *serviceState* for automatic repairs under the property *orchestrationServices* in instance view of virtual machine scale set shows the current state of the automatic repairs. When a scale set is opted into automatic repairs, the value of parameter *serviceState* is set to *Running*. When the automatic repairs are suspended for a scale set, the parameter *serviceState* is set to *Suspended*. If *automaticRepairsPolicy* is defined on a scale set but the automatic repairs feature is not enabled, then the parameter *serviceState* is set to *Not Running*.
+Virtual Machine Scale Sets provide the capability to temporarily suspend automatic instance repairs if needed. The *serviceState* for automatic repairs under the property *orchestrationServices* in instance view of virtual machine scale set shows the current state of the automatic repairs. When a scale set is opted into automatic repairs, the value of parameter *serviceState* is set to *Running*. When the automatic repairs are suspended for a scale set, the parameter *serviceState* is set to *Suspended*. If *automaticRepairsPolicy* is defined on a scale set but the automatic repairs feature isn't enabled, then the parameter *serviceState* is set to *Not Running*.
If newly created instances for replacing the unhealthy ones in a scale set continue to remain unhealthy even after repeatedly performing repair operations, then as a safety measure the platform updates the *serviceState* for automatic repairs to *Suspended*. You can resume the automatic repairs again by setting the value of *serviceState* for automatic repairs to *Running*. Detailed instructions are provided in the section on [viewing and updating the service state of automatic repairs policy](#viewing-and-updating-the-service-state-of-automatic-instance-repairs-policy) for your scale set.
The automatic instance repairs process works as follows:
1. [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) ping the application endpoint inside each virtual machine in the scale set to get application health status for each instance. 2. If the endpoint responds with a status 200 (OK), then the instance is marked as "Healthy". In all the other cases (including if the endpoint is unreachable), the instance is marked "Unhealthy".
-3. When an instance is found to be unhealthy, the scale set triggers a repair action by deleting the unhealthy instance and creating a new one to replace it.
+3. When an instance is found to be unhealthy, the scale set triggers a repair action by deleting the unhealthy instance, and creating a new one to replace it.
4. Instance repairs are performed in batches. At any given time, no more than 5% of the total instances in the scale set are repaired. If a scale set has fewer than 20 instances, the repairs are done for one unhealthy instance at a time. 5. The above process continues until all unhealthy instances in the scale set are repaired. ## Instance protection and automatic repairs
-If an instance in a scale set is protected by applying one of the [protection policies](./virtual-machine-scale-sets-instance-protection.md), then automatic repairs are not performed on that instance. This applies to both the protection policies: *Protect from scale-in* and *Protect from scale-set* actions.
+If an instance in a scale set is protected by applying one of the [protection policies](./virtual-machine-scale-sets-instance-protection.md), then automatic repairs aren't performed on that instance. This behavior applies to both the protection policies: *Protect from scale-in* and *Protect from scale-set* actions.
## Terminate notification and automatic repairs
-If the [terminate notification](./virtual-machine-scale-sets-terminate-notification.md) feature is enabled on a scale set, then during automatic repair operation, the deletion of an unhealthy instance follows the terminate notification configuration. A terminate notification is sent through Azure metadata service ΓÇô scheduled events ΓÇô and instance deletion is delayed for the duration of the configured delay timeout. However, the creation of a new instance to replace the unhealthy one does not wait for the delay timeout to complete.
+If the [terminate notification](./virtual-machine-scale-sets-terminate-notification.md) feature is enabled on a scale set, then during automatic repair operation, the deletion of an unhealthy instance follows the terminate notification configuration. A terminate notification is sent through Azure metadata service (scheduled events), and instance deletion is delayed during the configured delay timeout. However, the creation of a new instance to replace the unhealthy one doesn't wait for the delay timeout to complete.
## Enabling automatic repairs policy when creating a new scale set
-For enabling automatic repairs policy while creating a new scale set, ensure that all the [requirements](#requirements-for-using-automatic-instance-repairs) for opting in to this feature are met. The application endpoint should be correctly configured for scale set instances to avoid triggering unintended repairs while the endpoint is getting configured. For newly created scale sets, any instance repairs are performed only after waiting for the duration of grace period. To enable the automatic instance repair in a scale set, use *automaticRepairsPolicy* object in the virtual machine scale set model.
+For enabling automatic repairs policy while creating a new scale set, ensure that all the [requirements](#requirements-for-using-automatic-instance-repairs) for opting in to this feature are met. The application endpoint should be correctly configured for scale set instances to avoid triggering unintended repairs while the endpoint is getting configured. For newly created scale sets, any instance repairs are performed only after the grace period completes. To enable the automatic instance repair in a scale set, use *automaticRepairsPolicy* object in the virtual machine scale set model.
-You can also use this [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-automatic-repairs-slb-health-probe) to deploy a virtual machine scale set with load balancer health probe and automatic instance repairs enabled with a grace period of 30 minutes.
+You can also use this [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-automatic-repairs-slb-health-probe) to deploy a virtual machine scale set. The scale set has a load balancer health probe and automatic instance repairs enabled with a grace period of 30 minutes.
### Azure portal
The following steps enable automatic repairs policy when creating a new scale
1. Locate the **Automatic repair policy** section. 1. Turn **On** the **Automatic repairs** option. 1. In **Grace period (min)**, specify the grace period in minutes, allowed values are between 30 and 90 minutes.
-1. When you are done creating the new scale set, select **Review + create** button.
+1. When you're done creating the new scale set, select **Review + create** button.
### REST API
az vmss create \
--automatic-repairs-grace-period 30 ```
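A fuller sketch of the create command, assuming an existing load balancer and health probe (all resource names and the image alias are illustrative):

```azurecli
az vmss create \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --image Ubuntu2204 \
    --load-balancer myLoadBalancer \
    --health-probe myProbe \
    --automatic-repairs-grace-period 30
```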
-The above example uses an existing load balancer and health probe for monitoring application health status of instances. If you prefer to use an application health extension for monitoring instead, you can create a scale set, configure the application health extension and then enable the automatic instance repairs policy using the *az vmss update*, as explained in the next section.
+The above example uses an existing load balancer and health probe for monitoring application health status of instances. If you prefer using an application health extension for monitoring, you can do the following instead: create a scale set, configure the application health extension, and enable the automatic instance repairs policy. You can enable that policy by using the *az vmss update* command, as explained in the next section.
## Enabling automatic repairs policy when updating an existing scale set
You can modify the automatic repairs policy of an existing scale set through the
1. Locate the **Automatic repair policy** section. 1. Turn **On** the **Automatic repairs** option. 1. In **Grace period (min)**, specify the grace period in minutes, allowed values are between 30 and 90 minutes.
-1. When you are done, select **Save**.
+1. When you're done, select **Save**.
### REST API
Update-AzVmss `
### Azure CLI 2.0
-The following is an example for updating the automatic instance repairs policy of an existing scale set, using *[az vmss update](/cli/azure/vmss#az-vmss-update)*.
+The following example demonstrates how to update the automatic instance repairs policy of an existing scale set, using *[az vmss update](/cli/azure/vmss#az-vmss-update)*.
```azurecli-interactive az vmss update \
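    --resource-group myResourceGroup \
    --name myScaleSet \
    --enable-automatic-repairs true \
    --automatic-repairs-grace-period 30   # sketch of the remaining arguments; resource names are illustrative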
az vmss get-instance-view \
--resource-group MyResourceGroup ```
-Use [set-orchestration-service-state](/cli/azure/vmss#az-vmss-set-orchestration-service-state) cmdlet to update the *serviceState* for automatic instance repairs. Once the scale set is opted into the automatic repair feature, then you can use this cmdlet to suspend or resume automatic repairs for you scale set.
+Use [set-orchestration-service-state](/cli/azure/vmss#az-vmss-set-orchestration-service-state) cmdlet to update the *serviceState* for automatic instance repairs. Once the scale set is opted into the automatic repair feature, then you can use this cmdlet to suspend or resume automatic repairs for your scale set.
```azurecli-interactive az vmss set-orchestration-service-state \
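    --resource-group myResourceGroup \
    --name myScaleSet \
    --service-name AutomaticRepairs \
    --action Suspend   # or Resume; sketch with illustrative resource names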
Get-AzVmss `
-InstanceView ```
-Use Set-AzVmssOrchestrationServiceState cmdlet to update the *serviceState* for automatic instance repairs. Once the scale set is opted into the automatic repair feature, you can use this cmdlet to suspend or resume automatic repairs for you scale set.
+Use Set-AzVmssOrchestrationServiceState cmdlet to update the *serviceState* for automatic instance repairs. Once the scale set is opted into the automatic repair feature, you can use this cmdlet to suspend or resume automatic repairs for your scale set.
```azurepowershell-interactive Set-AzVmssOrchestrationServiceState `
Set-AzVmssOrchestrationServiceState `
**Failure to enable automatic repairs policy**
-If you get a 'BadRequest' error with a message stating "Could not find member 'automaticRepairsPolicy' on object of type 'properties'", then check the API version used for virtual machine scale set. API version 2018-10-01 or higher is required for this feature.
+If you get a 'BadRequest' error with a message stating "Couldn't find member 'automaticRepairsPolicy' on object of type 'properties'", then check the API version used for virtual machine scale set. API version 2018-10-01 or higher is required for this feature.
**Instance not getting repaired even when policy is enabled**
-The instance could be in grace period. This is the amount of time to wait after any state change on the instance before performing repairs. This is to avoid any premature or accidental repairs. The repair action should happen once the grace period is completed for the instance.
+The instance could be in grace period. This period is the amount of time to wait after any state change on the instance before performing repairs, which helps avoid any premature or accidental repairs. The repair action should happen once the grace period is completed for the instance.
**Viewing application health status for scale set instances**
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-redundancy.md
Title: Redundancy options for Azure managed disks
description: Learn about zone-redundant storage and locally-redundant storage for Azure managed disks. Previously updated : 02/03/2022 Last updated : 10/19/2022
Azure managed disks offer two storage redundancy options, zone-redundant storage
## Locally-redundant storage for managed disks
-Locally-redundant storage (LRS) replicates your data three times within a single data center in the selected region. LRS protects your data against server rack and drive failures. To protect an LRS disk from a zonal failure like a natural disaster or other issues, take the following steps:
+Locally-redundant storage (LRS) replicates your data three times within a single data center in the selected region. LRS protects your data against server rack and drive failures. LRS disks provide at least 99.999999999% (11 9's) of durability over a given year. To protect an LRS disk from a zonal failure like a natural disaster or other issues, take the following steps:
- Use applications that can synchronously write data to two zones, and automatically failover to another zone during a disaster. - An example would be SQL Server Always On.
If your workflow doesn't support application-level synchronous writes across zon
## Zone-redundant storage for managed disks
-Zone-redundant storage (ZRS) synchronously replicates your Azure managed disk across three Azure availability zones in the region you select. Each availability zone is a separate physical location with independent power, cooling, and networking.
+Zone-redundant storage (ZRS) synchronously replicates your Azure managed disk across three Azure availability zones in the region you select. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS disks provide at least 99.9999999999% (12 9's) of durability over a given year.
A ZRS disk lets you recover from failures in availability zones. If a zone went down, a ZRS disk can be attached to a virtual machine (VM) in a different zone. ZRS disks can also be shared between VMs for improved availability with clustered or distributed applications like SQL FCI, SAP ASCS/SCS, or GFS2. A shared ZRS disk can be attached to primary and secondary VMs in different zones to take advantage of both ZRS and [availability zones](../availability-zones/az-overview.md). If your primary zone fails, you can quickly fail over to the secondary VM using [SCSI persistent reservation](disks-shared-enable.md#supported-scsi-pr-commands).
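As a sketch, a ZRS managed disk is created by choosing a ZRS SKU with Azure CLI (resource names and region are illustrative; ZRS disks are only available in regions with availability zones):

```azurecli
az disk create \
    --resource-group myResourceGroup \
    --name myZRSDisk \
    --location westus2 \
    --size-gb 128 \
    --sku Premium_ZRS
```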
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
The VM Image Builder service is available in the following regions:
- East Asia - Korea Central - South Africa North
+- Qatar Central
- USGov Arizona (public preview) - USGov Virginia (public preview)
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
## New Features
-Many new features like ARM64, Accelerated Networking, TrustedVMSupported etc. are only supported through Azure Compute Gallery and not available for 'Managed images'. For a complete list of new features available through Azure Compute Gallery, please refer
-https://learn.microsoft.com/cli/azure/sig/image-version?view=azure-cli-latest#az-sig-image-version-create
+Many new features, such as ARM64, Accelerated Networking, and TrustedVM, are only supported through Azure Compute Gallery and aren't available for 'Managed images'. For a complete list of new features available through Azure Compute Gallery, refer to
+https://learn.microsoft.com/cli/azure/sig/image-definition?view=azure-cli-latest#az-sig-image-definition-create
## Next steps
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
To disable the encryption, see [Disable encryption and remove the encryption ext
You can enable disk encryption on an existing or running Linux VM in Azure by using the [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-linux-vm-without-aad).
-1. Click **Deploy to Azure** on the Azure quickstart template.
+1. Click **Deploy to Azure** on the Azure Quickstart Template.
2. Select the subscription, resource group, resource group location, parameters, legal terms, and agreement. Click **Create** to enable encryption on the existing or running VM.
The **EncryptFormatAll** parameter reduces the time for Linux data disks to be e
>If you're setting this parameter while updating encryption settings, it might lead to a reboot before the actual encryption. In this case, you will also want to remove the disk you don't want formatted from the fstab file. Similarly, you should add the partition you want encrypt-formatted to the fstab file before initiating the encryption operation. ### EncryptFormatAll criteria
-The parameter goes though all partitions and encrypts them as long as they meet **all** of the criteria below:
+The parameter goes through all partitions and encrypts them as long as they meet **all** of the criteria below:
- Is not a root/OS/boot partition - Is not already encrypted - Is not a BEK volume
You can add a new data disk using [az vm disk attach](add-disk.md), or [through
### Enable encryption on a newly added disk with Azure CLI
- If the VM was previously encrypted with "All" then the --volume-type parameter should remain "All". All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS", then the --volume-type parameter should be changed to "All" so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data", then it can remain "Data" as demonstrated below. Adding and attaching a new data disk to a VM is not sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM prior to enabling encryption. On Linux the disk must be mounted in /etc/fstab with a [persistent block device name](/azure-docs-test-baseline-pr/virtual-machines/linux/troubleshoot-device-names-problems).
+ If the VM was previously encrypted with "All" then the --volume-type parameter should remain "All". All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS", then the --volume-type parameter should be changed to "All" so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data", then it can remain "Data" as demonstrated below. Adding and attaching a new data disk to a VM is not sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM prior to enabling encryption. On Linux the disk must be mounted in /etc/fstab with a [persistent block device name](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
In contrast to PowerShell syntax, the CLI does not require the user to provide a unique sequence version when enabling encryption. The CLI automatically generates and uses its own unique sequence version value.
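For example, a minimal sketch of enabling encryption for data volumes only; the VM and key vault names are illustrative assumptions:

```azurecli
az vm encryption enable \
    --resource-group myResourceGroup \
    --name myVM \
    --disk-encryption-keyvault myKeyVault \
    --volume-type Data
```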
Azure Disk Encryption does not work for the following Linux scenarios, features,
- Encrypting basic tier VM or VMs created through the classic VM creation method. - Disabling encryption on an OS drive or data drive of a Linux VM when the OS drive is encrypted.-- Encrypting the OS drive for Linux virtual machine scale sets.
+- Encrypting the OS drive for Linux Virtual Machine Scale Sets.
- Encrypting custom images on Linux VMs. - Integration with an on-premises key management system. - Azure Files (shared file system).
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
The location is the region where the custom image will be created. The following
- East Asia - Korea Central - South Africa North
+- Qatar Central
- USGov Arizona (Public Preview) - USGov Virginia (Public Preview)
Customize properties:
### File customizer
-The `File` customizer lets Image Builder download a file from a GitHub repo or Azure storage. The customizer supports both Linux and Windows. If you have an image build pipeline that relies on build artifacts, you can set the file customizer to download from the build share, and move the artifacts into the image.
+The `File` customizer lets Image Builder download a file from a GitHub repo or Azure storage. The customizer supports both Linux and Windows. If you have an image build pipeline that relies on build artifacts, you can set the file customizer to download from the build share, and move the artifacts into the image.
+ # [JSON](#tab/json)
File customizer properties:
- **sourceUri** - an accessible storage endpoint, this endpoint can be GitHub or Azure storage. You can only download one file, not an entire directory. If you need to download a directory, use a compressed file, then uncompress it using the Shell or PowerShell customizers. > [!NOTE]
- > If the sourceUri is an Azure Storage Account, irrespective if the blob is marked public, you'll to grant the Managed User Identity permissions to read access on the blob. See this [example](./image-builder-user-assigned-identity.md#create-a-resource-group) to set the storage permissions.
+ > If the sourceUri is an Azure Storage Account, irrespective if the blob is marked public, you'll need to grant the Managed User Identity permissions to read access on the blob. See this [example](./image-builder-user-assigned-identity.md#create-a-resource-group) to set the storage permissions.
- **destination** - the full destination path and file name. Any referenced path and subdirectories must exist; use the Shell or PowerShell customizers to set these paths up beforehand. You can use the script customizers to create the path (see the combined sketch after the property list).
This customizer is supported by Windows directories and Linux paths, but there a
If there's an error trying to download the file, or put it in a specified directory, then the customize step will fail, and this error will appear in the customization.log. > [!NOTE]
-> The file customizer is only suitable for small file downloads, < 20MB. For larger file downloads, use a script or inline command, then use code to download files, such as, Linux `wget` or `curl`, Windows, `Invoke-WebRequest`.
+> The file customizer is only suitable for small file downloads (< 20 MB). For larger file downloads, use a script or inline command, then use code to download files, such as `wget` or `curl` on Linux, or `Invoke-WebRequest` on Windows. For files that are in Azure storage, ensure that you assign an identity with permissions to view that file to the build VM by following the documentation here: [User Assigned Identity for the Image Builder Build VM](https://learn.microsoft.com/azure/virtual-machines/linux/image-builder-json?tabs=json%2Cazure-powershell#user-assigned-identity-for-the-image-builder-build-vm). Any file that isn't stored in Azure must be publicly accessible for Azure Image Builder to be able to download it.
- **sha256Checksum** - generate the SHA256 checksum of the file locally, update the checksum value to lowercase, and Image Builder will validate the checksum during the deployment of the image template.
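Putting these properties together, a minimal sketch of a `File` customizer might look like the following; the URI, destination path, and checksum placeholder are illustrative:

```json
{
  "type": "File",
  "name": "downloadBuildArtifact",
  "sourceUri": "https://mystorageaccount.blob.core.windows.net/artifacts/app.tar.gz",
  "destination": "/tmp/app.tar.gz",
  "sha256Checksum": "<sha256-of-the-file>"
}
```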
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
description: Overview of Azure managed disks, which handle the storage accounts
Previously updated : 04/22/2022 Last updated : 10/19/2022
Let's go over some of the benefits you gain by using managed disks.
### Highly durable and available
-Managed disks are designed for 99.999% availability. Managed disks achieve this by providing you with three replicas of your data, allowing for high durability. If one or even two replicas experience issues, the remaining replicas help ensure persistence of your data and high tolerance against failures. This architecture has helped Azure consistently deliver enterprise-grade durability for infrastructure as a service (IaaS) disks, with an industry-leading ZERO% annualized failure rate.
+Managed disks are designed for 99.999% availability. Managed disks achieve this by providing you with three replicas of your data, allowing for high durability. If one or even two replicas experience issues, the remaining replicas help ensure persistence of your data and high tolerance against failures. This architecture has helped Azure consistently deliver enterprise-grade durability for infrastructure as a service (IaaS) disks, with an industry-leading ZERO% annualized failure rate. Locally-redundant storage (LRS) disks provide at least 99.999999999% (11 9's) of durability over a given year and zone-redundant storage (ZRS) disks provide at least 99.9999999999% (12 9's) of durability over a given year.
### Simple and scalable VM deployment
virtual-machines Unmanaged Disks Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/unmanaged-disks-deprecation.md
Start planning your migration to Azure managed disks today.
## What resources are available for this migration? -- [Microsoft Q&A](https://github.com/MicrosoftDocs/azure-docs/blob/master/answers/topics/azure-virtual-machines-migration.html): Microsoft and community support for migration.
+- [Microsoft Q&A](/answers/topics/azure-virtual-machines-migration.html): Microsoft and community support for migration.
- [Azure Migration Support](https://ms.portal.azure.com/#create/Microsoft.Support/Parameters/%7B%22pesId%22:%226f16735c-b0ae-b275-ad3a-03479cfa1396%22,%22supportTopicId%22:%221135e3d0-20e2-aec5-4ef0-55fd3dae2d58%22%7D): Dedicated support team for technical assistance during migration. - [Microsoft FastTrack](https://www.microsoft.com/fasttrack): FastTrack can assist eligible customers with planning and execution of this migration. [Nominate yourself](https://azure.microsoft.com/programs/azure-fasttrack/#nomination). - If your company/organization has partnered with Microsoft or works with Microsoft representatives such as cloud solution architects (CSAs) or technical account managers (TAMs), please work with them for additional resources for migration.
virtual-machines Expose Sap Odata To Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-odata-to-power-query.md
Integrations between SAP products and the Microsoft 365 portfolio range from cus
The mechanism described in this article uses the standard built-in OData capabilities of Power Query and puts emphasis for SAP landscapes deployed on Azure. Address on-premises landscapes with the Azure API Management [self-hosted Gateway](../../../api-management/self-hosted-gateway-overview.md).
-For more information on which Microsoft products support Power Query, see [the Power Query documentation](/power-query-what-is-power-query#where-can-you-use-power-query).
+For more information on which Microsoft products support Power Query, see [the Power Query documentation](/power-query/power-query-what-is-power-query#where-can-you-use-power-query).
## Setup considerations End users have a choice between local desktop or web-based clients (for instance Excel or Power BI). The client execution environment needs to be considered for the network path between the client application and the target SAP workload. Network access solutions such as VPN aren't in scope for apps like Excel for the web.
-[Azure API Management](/services/api-management/) reflects local and web-based environment needs with different deployment modes that can be applied to Azure landscapes ([internal](../../../api-management/api-management-using-with-internal-vnet.md?tabs=stv2)
+[Azure API Management](/azure/api-management/) reflects local and web-based environment needs with different deployment modes that can be applied to Azure landscapes ([internal](../../../api-management/api-management-using-with-internal-vnet.md?tabs=stv2)
or [external](../../../api-management/api-management-using-with-vnet.md?tabs=stv2)). `Internal` refers to instances that are fully restricted to a private virtual network whereas `external` retains public access to Azure API Management. On-premises installations require a hybrid deployment to apply the approach as is using the Azure API Management [self-hosted Gateway](../../../api-management/self-hosted-gateway-overview.md). Power Query requires matching API service URL and Azure AD application ID URL. Configure a [custom domain for Azure API Management](../../../api-management/configure-custom-domain.md) to meet the requirement.
The highlighted button triggers a flow that forwards the OData PATCH request to
[Understand Azure Application Gateway and Web Application Firewall for SAP](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
-[Automate API deployments with APIOps](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops)
+[Automate API deployments with APIOps](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops)
virtual-machines Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md
Dispatching approaches include traditional reverse proxies like Apache, platform
## Primary Azure services
-[Azure Application Gateway](../../../application-gateway/how-application-gateway-works.md) handles public [internet-based](../../../application-gateway/configuration-front-end-ip.md) and [internal private](../../../application-gateway/configuration-front-end-ip.md) HTTP routing, along with [encrypted tunneling across Azure subscriptions](../../../application-gateway/private-link.md). Examples include [security](../../../application-gateway/features.md) and [autoscaling](../../../application-gateway/application-gateway-autoscaling-zone-redundant.md).
+[Azure Application Gateway](../../../application-gateway/how-application-gateway-works.md) handles public [internet-based](../../../application-gateway/configuration-frontend-ip.md) and [internal private](../../../application-gateway/configuration-frontend-ip.md) HTTP routing, along with [encrypted tunneling across Azure subscriptions](../../../application-gateway/private-link.md). Examples include [security](../../../application-gateway/features.md) and [autoscaling](../../../application-gateway/application-gateway-autoscaling-zone-redundant.md).
Azure Application Gateway is focused on exposing web applications, so it offers a web application firewall (WAF). Workloads in other virtual networks that will communicate with SAP through Azure Application Gateway can be connected via [private links](../../../application-gateway/private-link-configure.md), even across tenants.
virtual-network Configure Public Ip Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-application-gateway.md
Azure Application Gateway is a web traffic load balancer that manages traffic to your web applications. Application Gateway makes routing decisions based on attributes of an HTTP request, such as the URI path or host headers. The frontend of an Application Gateway is the connection point for the applications in its backend pool.
-An Application Gateway frontend can be a private IP address, public IP address, or both. The V1 SKU of Application Gateway supports basic dynamic public IPs. The V2 SKU supports standard SKU public IPs that are static only. Application Gateway V2 SKU doesn't support an internal IP address as it's only frontend. For more information, see [Application Gateway front-end IP address configuration](../../application-gateway/configuration-front-end-ip.md).
+An Application Gateway frontend can be a private IP address, public IP address, or both. The V1 SKU of Application Gateway supports basic dynamic public IPs. The V2 SKU supports standard SKU public IPs that are static only. The Application Gateway V2 SKU doesn't support an internal IP address as its only frontend. For more information, see [Application Gateway frontend IP address configuration](../../application-gateway/configuration-frontend-ip.md).
In this article, you'll learn how to create an Application Gateway using an existing public IP in your subscription.
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
Title: Create a custom IP address prefix - Azure CLI
+ Title: Create a custom IPv4 address prefix - Azure CLI
description: Learn about how to create a custom IP address prefix using the Azure CLI
Last updated 03/31/2022
-# Create a custom IP address prefix using the Azure CLI
+# Create a custom IPv4 address prefix using the Azure CLI
-A custom IP address prefix enables you to bring your own IP ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate them with your Azure subscription. You continue to own the range, though Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to:
The steps in this article detail the process to:
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]

- This tutorial requires version 2.28 or later of the Azure CLI (you can run `az version` to determine which you have). If using Azure Cloud Shell, the latest version is already installed.
- Sign in to Azure CLI and ensure you've selected the subscription with which you want to use this feature using `az account`.
-- A customer owned IP range to provision in Azure.
+- A customer owned IPv4 range to provision in Azure.
- A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.

> [!NOTE]
The steps in this article detail the process to:
## Pre-provisioning steps
-To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IP address range.
+To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IPv4 address range.
### Requirements and prefix readiness
To utilize the Azure BYOIP feature, you must perform the following steps prior t
For this ROA:
- * The Origin AS must be listed as 8075.
+ * The Origin AS must be listed as 8075 for the Public Cloud. (If the range will be onboarded to the US Gov Cloud, the Origin AS must be listed as 8070.)
 * The validity end date (expiration date) needs to account for the time you intend to have the prefix advertised by Microsoft. Some RIRs don't present the validity end date as an option, or they choose the date for you.
To utilize the Azure BYOIP feature, you must perform the following steps prior t
To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
-The following steps show the steps required to prepare sample customer range (1.2.3.0/24) for provisioning.
+The following steps describe how to prepare the sample customer range (1.2.3.0/24) for provisioning to the Public cloud.
> [!NOTE]
> Execute the following commands in PowerShell with OpenSSL installed.
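For orientation, a minimal sketch of this preparation follows. The file names are illustrative, and the authorization message uses the subscription ID, prefix, and expiration date format shown elsewhere in this article (on PowerShell 7+, use `-AsByteStream` instead of `-Encoding Byte`):

```powershell
# Create a key pair and a self-signed certificate to place in the prefix's Whois/RDAP record.
./openssl genrsa -out byoipprivate.key 2048
./openssl req -new -x509 -key byoipprivate.key -days 180 | Out-File byoippublickey.cer -Encoding Ascii

# Sign the authorization message: <subscription ID>|<prefix>|<expiration date>.
$byoipauth = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd'
Set-Content -Path byoipauth.txt -Value $byoipauth -NoNewline
./openssl dgst -sha256 -sign byoipprivate.key -keyform PEM -out byoipauthsigned.txt byoipauth.txt
$byoipauthsigned = [Convert]::ToBase64String((Get-Content 'byoipauthsigned.txt' -Encoding Byte))
```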
As before, the operation is asynchronous. Use [az network custom-ip prefix show]
> The estimated time to fully complete the commissioning process is 3-4 hours.

> [!IMPORTANT]
-> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact.
+> As the custom IP prefix transitions to a **Commissioned** state, the range is advertised from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you can use the regional commissioning feature to put a custom IP prefix into a state where it's only advertised within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
## Next steps
virtual-network Create Custom Ip Address Prefix Ipv6 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6.md
+
+ Title: Create a custom IPv6 address prefix
+
+description: Learn about how to create a custom IPv6 address prefix using Azure PowerShell
+ Last updated : 03/31/2022
+# Create a custom IPv6 address prefix using Azure PowerShell
+
+A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate them with your Azure subscription. You continue to own the range, though Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+
+The steps in this article detail the process to:
+
+* Prepare a range to provision
+
+* Provision the range for IP allocation
+
+* Enable the range to be advertised by Microsoft
+
+## Differences between using BYOIPv4 and BYOIPv6
+
+> [!IMPORTANT]
+> Onboarded custom IPv6 address prefixes have several unique attributes that make them different from custom IPv4 address prefixes.
+
+* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Note that global ranges must be /48 in size, while regional ranges must always be /64 in size.
+
+* Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes.
+
+* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell.
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+- Ensure your Az.Network module is 4.21.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"` (see the example later in this section).
+- A customer owned IP range to provision in Azure.
+ - A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
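+For example, the following commands verify the installed Az.Network module version, as described in the prerequisites, and update it if needed:
+
+```azurepowershell-interactive
+# Verify the installed Az.Network module version (must be 4.21.0 or later).
+Get-InstalledModule -Name 'Az.Network'
+
+# Update the module if the installed version is older.
+Update-Module -Name 'Az.Network'
+```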
+
+> [!NOTE]
+> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
+
+## Pre-provisioning steps
+
+To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md#pre-provisioning-steps) for details. Note that all of these steps should be completed for the IPv6 global (parent) range.
+
+## Provisioning for IPv6
+
+The following steps show the modified process for provisioning a sample global (parent) IPv6 range (2a05:f500:2::/48) and regional (child) IPv6 ranges. Some of the steps have been abbreviated or condensed from the [IPv4 instructions](create-custom-ip-address-prefix-powershell.md) to focus on the differences between IPv4 and IPv6.
+
+### Create a resource group and specify the prefix and authorization messages
+
+Create a resource group in the desired location for provisioning the global range resource.
+
+> [!IMPORTANT]
+> Although the resource for the global range will be associated with a region, the prefix will be advertised by the Microsoft WAN globally.
+
+ ```azurepowershell-interactive
+$rg =@{
+ Name = 'myResourceGroup'
+ Location = 'WestUS2'
+}
+New-AzResourceGroup @rg
+```
+
+### Provision a global custom IPv6 address prefix
+
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. (The `-AuthorizationMessage` and `-SignedMessage` parameters are constructed in the same manner as they are for IPv4; for more information, see [Create a custom IP prefix - PowerShell](create-custom-ip-address-prefix-powershell.md).) Note that no zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones).
+
+ ```azurepowershell-interactive
+$prefix =@{
+ Name = 'myCustomIPv6GlobalPrefix'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'WestUS'
+ CIDR = '2a05:f500:2::/48'
+ AuthorizationMessage = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|2a05:f500:2::/48|yyyymmdd'
+ SignedMessage = $byoipauthsigned
+}
+$myCustomIpPrefix = New-AzCustomIPPrefix @prefix
+```
+
+### Provision a regional custom IPv6 address prefix
+
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same region as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "child" custom IP prefixes are advertised locally from the region where they're created. Because validation is only done during global custom IP prefix provisioning, no authorization or signed message is required. (Because these ranges are advertised from a specific region, availability zones can be utilized.)
+
+ ```azurepowershell-interactive
+$prefix =@{
+ Name = 'myCustomIPv6RegionalPrefix'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'EastUS2'
+ CIDR = '2a05:f500:2:1::/64'
+}
+$myCustomIpPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3
+```
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
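+As a sketch of that derivation step (the prefix name and length here are illustrative, and the command assumes the regional prefix object from the previous step):
+
+```azurepowershell-interactive
+# Derive a public IPv6 prefix from the provisioned regional custom IP prefix.
+# Name and prefix length are illustrative.
+$pubPrefix = New-AzPublicIpPrefix `
+    -Name 'myPublicIPv6Prefix' `
+    -ResourceGroupName 'myResourceGroup' `
+    -Location 'EastUS2' `
+    -IpAddressVersion IPv6 `
+    -PrefixLength 124 `
+    -CustomIpPrefix $myCustomIpPrefix `
+    -Zone 1,2,3
+```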
+
+> [!IMPORTANT]
+> Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range.
+
+### Commission the custom IPv6 address prefixes
+
+When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix.
++
+The safest strategy for range migrations is as follows:
+1. Provision all required regional custom IPv6 prefixes in their respective regions. Create public IPv6 prefixes and public IP addresses and attach to resources.
+2. Commission each regional custom IPv6 prefix and test connectivity to the IPs within the region. Repeat for each regional custom IPv6 prefix.
+3. After all regional custom IPv6 prefixes (and derived prefixes/IPs) have been verified to work as expected, commission the global custom IPv6 prefix, which will advertise the larger range to the Internet.
+
+Using the example ranges above, the command sequence would be:
+
+```azurepowershell-interactive
+Update-AzCustomIpPrefix -ResourceId $myCustomIPv6RegionalPrefix.Id -Commission
+```
+Followed by:
+
+```azurepowershell-interactive
+Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Commission
+```
+
+It's possible to commission the global custom IPv6 prefix before the regional custom IPv6 prefixes; however, doing so means the global range is advertised to the Internet before the regional prefixes are ready, so it isn't recommended for migrations of active ranges. It's also possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes, or to decommission a regional custom IP prefix while the global prefix is still active (commissioned).
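+For example, to reverse the advertisement later, a sketch using the same example objects as above and the `-Decommission` switch of `Update-AzCustomIpPrefix`:
+
+```azurepowershell-interactive
+# Stop advertising the global range to the Internet; regional advertisements
+# can be decommissioned independently with the same switch.
+Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Decommission
+```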
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md).
+
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
Title: Create a custom IP address prefix - Azure portal
+ Title: Create a custom IPv4 address prefix - Azure portal
description: Learn about how to onboard a custom IP address prefix using the Azure portal
Last updated 03/31/2022
-# Create a custom IP address prefix using the Azure portal
+# Create a custom IPv4 address prefix using the Azure portal
-A custom IP address prefix enables you to bring your own IP ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate them with your Azure subscription. You continue to own the range, though Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to:
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A customer owned IP range to provision in Azure.
+- A customer owned IPv4 range to provision in Azure.
- A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.

> [!NOTE]
The steps in this article detail the process to:
## Pre-provisioning steps
-To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IP address range.
+To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IPv4 address range.
### Requirements and prefix readiness
To utilize the Azure BYOIP feature, you must perform the following steps prior t
For this ROA:
- * The Origin AS must be listed as 8075.
+ * The Origin AS must be listed as 8075 for the Public Cloud. (If the range will be onboarded to the US Gov Cloud, the Origin AS must be listed as 8070.)
 * The validity end date (expiration date) needs to account for the time you intend to have the prefix advertised by Microsoft. Some RIRs don't present the validity end date as an option, or they choose the date for you.
To utilize the Azure BYOIP feature, you must perform the following steps prior t
To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
-The following steps show the steps required to prepare sample customer range (1.2.3.0/24) for provisioning.
+The following steps describe how to prepare the sample customer range (1.2.3.0/24) for provisioning to the Public cloud.
> [!NOTE]
> Execute the following commands in PowerShell with OpenSSL installed.
The operation is asynchronous. You can check the status by reviewing the **Commi
> The estimated time to fully complete the commissioning process is 3-4 hours.

> [!IMPORTANT]
-> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact.
+> As the custom IP prefix transitions to a **Commissioned** state, the range is advertised from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you can use the regional commissioning feature to put a custom IP prefix into a state where it's only advertised within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
## Next steps
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
Title: Create a custom IP address prefix - Azure PowerShell
-description: Learn about how to create a custom IP address prefix using Azure PowerShell
+description: Learn about how to create a custom IPv4 address prefix using Azure PowerShell
Last updated 03/31/2022
-# Create a custom IP address prefix using Azure PowerShell
+# Create a custom IPv4 address prefix using Azure PowerShell
-A custom IP address prefix enables you to bring your own IP ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate them with your Azure subscription. You continue to own the range, though Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to:
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell.
- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-- Ensure your Az. Network module is 4.3.0 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az. Network" if necessary.
-- A customer owned IP range to provision in Azure.
+- Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
+- A customer owned IPv4 range to provision in Azure.
- A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.

If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
If you choose to install and use PowerShell locally, this article requires the A
## Pre-provisioning steps
-To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IP address range.
+To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IPv4 address range.
### Requirements and prefix readiness
To utilize the Azure BYOIP feature, you must perform the following steps prior t
For this ROA:
- * The Origin AS must be listed as 8075.
+ * The Origin AS must be listed as 8075 for the Public Cloud. (If the range will be onboarded to the US Gov Cloud, the Origin AS must be listed as 8070.)
 * The validity end date (expiration date) needs to account for the time you intend to have the prefix advertised by Microsoft. Some RIRs don't present the validity end date as an option, or they choose the date for you.
To utilize the Azure BYOIP feature, you must perform the following steps prior t
To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
-The following steps show the steps required to prepare sample customer range (1.2.3.0/24) for provisioning.
+The following steps describe how to prepare the sample customer range (1.2.3.0/24) for provisioning to the Public cloud.
> [!NOTE]
> Execute the following commands in PowerShell with OpenSSL installed.
As before, the operation is asynchronous. Use [Get-AzCustomIpPrefix](/powershell
> The estimated time to fully complete the commissioning process is 3-4 hours.

> [!IMPORTANT]
-> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact.
+> As the custom IP prefix transitions to a **Commissioned** state, the range is advertised from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time (for example, from a customer's on-premises network) could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you can use the regional commissioning feature to put a custom IP prefix into a state where it's only advertised within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
## Next steps
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
When ready, you can issue the command to have your range advertised from Azure a
* A custom IP prefix must be associated with a single Azure region.
-* The minimum size of an IP range is /24.
-
-* IPv6 is currently not supported for custom IP prefixes.
+* An IPv4 range must be /21 to /24 in size (inclusive). An IPv6 range must be /46 to /48 in size (inclusive).
* Custom IP prefixes do not currently support derivation of IPs with Internet Routing Preference or that use Global Tier (for cross-region load-balancing).
-* In regions with [availability zones](../../availability-zones/az-overview.md), a custom IP prefix must be specified as either zone-redundant or assigned to a specific zone. It can't be created with no zone specified in these regions. All IPs from the prefix must have the same zonal properties.
+* In regions with [availability zones](../../availability-zones/az-overview.md), a custom IPv4 prefix (or a regional custom IPv6 prefix) must be specified as either zone-redundant or assigned to a specific zone. It can't be created with no zone specified in these regions. All IPs from the prefix must have the same zonal properties.
* The advertisements of IPs from a custom IP prefix over Azure ExpressRoute aren't currently supported.
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
A custom IP address prefix is a contiguous range of IP addresses owned by an ext
This article explains how to:
+* Use the "regional commissioning" feature to safely migrate an active prefix to Azure
+
* Create public IP prefixes from provisioned custom IP prefixes
* Migrate active IP prefixes from outside Microsoft
Use the following CLI and PowerShell commands to create public IP prefixes with
|Tool|Command|
|---|---|
-|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create)|
+|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create)|
|PowerShell|[New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix)|

> [!NOTE]
If the provisioned range is being advertised to the Internet by another network,
* Alternatively, the ranges can be commissioned first and then changed. This process won't work for all resource types with public IPs. In those cases, a new resource with the provisioned public IP must be created.
+### Use the regional commissioning feature (PowerShell only)
+
+When a custom IP prefix transitions to a fully **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network. If the range is currently being advertised to the Internet from a location other than Microsoft at the same time, there is the potential for BGP routing instability or traffic loss. In order to ease the transition for a range that is currently "live" outside of Azure, you can utilize a *regional commissioning* feature, which will put an onboarded range into a **CommissionedNoInternetAdvertise** state where it is only advertised from within a single Azure region. This allows for testing of all the attached infrastructure from within this region before advertising this range to the Internet, and fits well with Method 1 in the section above.
+
+Use the following PowerShell example to put a custom IP prefix range into this state. (You can confirm the resulting state afterward with `Get-AzCustomIpPrefix`.)
+
+ ```azurepowershell-interactive
+# Substitute the resource ID of your onboarded custom IP prefix.
+Update-AzCustomIpPrefix `
+    -ResourceId $myCustomIpPrefix.Id `
+    -Commission `
+    -NoInternetAdvertise
+ ```
+
## View a custom IP prefix

To view a custom IP prefix, the following commands can be used in Azure CLI and Azure PowerShell. All public IP prefixes created under the custom IP prefix will be displayed.
Before you decommission a custom IP prefix, ensure it has no public IP prefixes
To migrate a custom IP prefix, it must first be deprovisioned from one region. A new custom IP prefix with the same CIDR can then be created in another region.
+### Are there any special considerations when using IPv6?
+
+Yes. There are multiple differences in provisioning and commissioning when using BYOIPv6. For more information, see [Create a custom IP address prefix - IPv6](create-custom-ip-address-prefix-ipv6.md).
+
### Status messages

When onboarding or removing a custom IP prefix from Azure, the **FailedReason** attribute of the resource will be updated. If the Azure portal is used, the message will be shown as a top-level banner. The following tables list the status messages when onboarding or removing a custom IP prefix.
+> [!NOTE]
+> If the FailedReason is **OperationNotFailed**, the custom IP prefix is in a stable state (for example, **Provisioned** or **Commissioned**) with no apparent issues.
+
#### Validation failures

| Failure message | Explanation |
| --- | --- |
| CustomerSignatureNotVerified | The signed message cannot be verified against the authentication message using the Whois/RDAP record for the prefix. |
-| NotAuthorizedToAdvertiseThisPrefix </br> or </br> ASN8075NotAllowedToAdvertise | ASN8075 is not authorized to advertise this prefix. Make sure your route origin authorization (ROA) is submitted correctly. Verify ROA. |
+| NotAuthorizedToAdvertiseThisPrefix </br> or </br> ASN8075NotAllowedToAdvertise | ASN8075 is not authorized to advertise this prefix. Make sure your route origin authorization (ROA) is submitted correctly. |
| PrefixRegisteredInAfricaAndSouthAmericaNotAllowedInOtherRegion | IP prefix is registered with AFRINIC or LACNIC. This prefix is not allowed to be used outside Africa/South America. |
| NotFindRoutingRegistryToGetCertificate | Cannot find the public key for the IP prefix using the registration data access protocol (RDAP) of the regional internet registry (RIR). |
| CIDRInAuthorizationMessageNotMatchCustomerIP | The CIDR in the authorization message does not match the submitted IP address. |
When onboarding or removing a custom IP prefix from Azure, the **FailedReason**
| Status message | Explanation |
| --- | --- |
| RegionalCommissioningInProgress | The range is being commissioned to advertise regionally within Azure. |
+| CommissionedNoInternetAdvertise | The range is now advertising regionally within Azure. |
| InternetCommissioningInProgress | The range is now advertising regionally within Azure and is being commissioned to advertise to the internet. |

#### Decommission status
virtual-network Virtual Network For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-for-azure-services.md
Deploying services within a virtual network provides the following capabilities:
|Category|Service| Dedicated<sup>1</sup> Subnet |
|-|-|-|
| Compute | Virtual machines: [Linux](/previous-versions/azure/virtual-machines/linux/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows](/previous-versions/azure/virtual-machines/windows/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-existing-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Cloud Service](/previous-versions/azure/reference/jj156091(v=azure.100)): Virtual network (classic) only<br/> [Azure Batch](../batch/nodes-and-pools.md?toc=%2fazure%2fvirtual-network%2ftoc.json#virtual-network-vnet-and-firewall-configuration)| No <br/> No <br/> No <br/> No<sup>2</sup>
-| Network | [Application Gateway - WAF](../application-gateway/application-gateway-ilb-arm.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/> [Azure Bastion](../bastion/bastion-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Network Virtual Appliances](/windows-server/networking/sdn/manage/use-network-virtual-appliances-on-a-vn)| Yes <br/> Yes <br/> Yes <br/> Yes <br/> No
+| Network | [Application Gateway - WAF](../application-gateway/application-gateway-ilb-arm.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[ExpressRoute Gateway](../expressroute/expressroute-about-virtual-network-gateways.md)<br/>[Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/> [Azure Bastion](../bastion/bastion-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Network Virtual Appliances](/windows-server/networking/sdn/manage/use-network-virtual-appliances-on-a-vn)| Yes <br/> Yes <br/> Yes <br/> Yes <br/> Yes <br/> No
|Data|[RedisCache](../azure-cache-for-redis/cache-how-to-premium-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure SQL Managed Instance](/azure/azure-sql/managed-instance/connectivity-architecture-overview?toc=%2fazure%2fvirtual-network%2ftoc.json)| Yes <br/> Yes <br/>
|Analytics | [Azure HDInsight](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks?toc=%2fazure%2fvirtual-network%2ftoc.json) |No<sup>2</sup> <br/> No<sup>2</sup> <br/>
| Identity | [Azure Active Directory Domain Services](../active-directory-domain-services/tutorial-create-instance.md?toc=%2fazure%2fvirtual-network%2ftoc.json) |No <br/>
Deploying services within a virtual network provides the following capabilities:
| Virtual desktop infrastructure| [Azure Lab Services](../lab-services/how-to-connect-vnet-injection.md)<br/>| Yes <br/>

<sup>1</sup> 'Dedicated' implies that only service-specific resources can be deployed in this subnet, and the subnet can't be shared with customer VMs or virtual machine scale sets. <br/>
-<sup>2</sup> It is recommended as a best practice to have these services in a dedicated subnet, but not a mandatory requirement imposed by the service.
+<sup>2</sup> Placing these services in a dedicated subnet is recommended as a best practice, but isn't a mandatory requirement imposed by the service.
virtual-network Virtual Network Network Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface.md
Use [az network nic list](/cli/azure/network/nic#az-network-nic-list) to view ne
az network nic list ```
-Use [az network nic show](/azure/network/nic#az-network-nic-show) to view the settings for a network interface.
+Use [az network nic show](/cli/azure/network/nic#az-network-nic-show) to view the settings for a network interface.
```azurecli az network nic show --name myNIC --resource-group myResourceGroup
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md
na Previously updated : 11/08/2019 Last updated : 10/20/2022
For the most up-to-date notifications, check the [Azure Virtual Network updates]
Service endpoints provide the following benefits:

-- **Improved security for your Azure service resources**: VNet private address spaces can overlap. You can't use overlapping spaces to uniquely identify traffic that originates from your VNet. Service endpoints provide the ability to secure Azure service resources to your virtual network by extending VNet identity to the service. Once you enable service endpoints in your virtual network, you can add a virtual network rule to secure the Azure service resources to your virtual network. The rule addition provides improved security by fully removing public internet access to resources and allowing traffic only from your virtual network.
+- **Improved security for your Azure service resources**: VNet private address spaces can overlap. You can't use overlapping spaces to uniquely identify traffic that originates from your VNet. Service endpoints let you secure Azure service resources to your virtual network by extending the VNet identity to the service. Once you enable service endpoints in your virtual network, you can add a virtual network rule to secure the Azure service resources to your virtual network. The rule addition provides improved security by fully removing public internet access to resources and allowing traffic only from your virtual network.
- **Optimal routing for Azure service traffic from your virtual network**: Today, any routes in your virtual network that force internet traffic to your on-premises and/or virtual appliances also force Azure service traffic to take the same route as the internet traffic. Service endpoints provide optimal routing for Azure traffic. Endpoints always take service traffic directly from your virtual network to the service on the Microsoft Azure backbone network. Keeping traffic on the Azure backbone network allows you to continue auditing and monitoring outbound Internet traffic from your virtual networks, through forced-tunneling, without impacting service traffic. For more information about user-defined routes and forced-tunneling, see [Azure virtual network traffic routing](virtual-networks-udr-overview.md).
-- **Simple to set up with less management overhead**: You no longer need reserved, public IP addresses in your virtual networks to secure Azure resources through IP firewall. There are no Network Address Translation (NAT) or gateway devices required to set up the service endpoints. You can configure service endpoints through a simple click on a subnet. There's no additional overhead to maintaining the endpoints.
+- **Simple to set up with less management overhead**: You no longer need reserved, public IP addresses in your virtual networks to secure Azure resources through IP firewall. There are no Network Address Translation (NAT) or gateway devices required to set up the service endpoints. You can configure service endpoints through a single selection on a subnet. There's no extra overhead to maintaining the endpoints.
## Limitations

- The feature is available only to virtual networks deployed through the Azure Resource Manager deployment model.
- Endpoints are enabled on subnets configured in Azure virtual networks. Endpoints can't be used for traffic from your premises to Azure services. For more information, see [Secure Azure service access from on-premises](#secure-azure-services-to-virtual-networks)
- For Azure SQL, a service endpoint applies only to Azure service traffic within a virtual network's region. For Azure Storage, you can [enable access to virtual networks in other regions](../storage/common/storage-network-security.md?tabs=azure-portal) in preview.
-- For Azure Data Lake Storage (ADLS) Gen 1, the VNet Integration capability is only available for virtual networks within the same region. Also note that virtual network integration for ADLS Gen1 uses the virtual network service endpoint security between your virtual network and Azure Active Directory (Azure AD) to generate additional security claims in the access token. These claims are then used to authenticate your virtual network to your Data Lake Storage Gen1 account and allow access. The *Microsoft.AzureActiveDirectory* tag listed under services supporting service endpoints is used only for supporting service endpoints to ADLS Gen 1. Azure AD doesn't support service endpoints natively. For more information about Azure Data Lake Store Gen 1 VNet integration, see [Network security in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+- For Azure Data Lake Storage (ADLS) Gen 1, the VNet Integration capability is only available for virtual networks within the same region. Also note that virtual network integration for ADLS Gen1 uses the virtual network service endpoint security between your virtual network and Azure Active Directory (Azure AD) to generate extra security claims in the access token. These claims are then used to authenticate your virtual network to your Data Lake Storage Gen1 account and allow access. The *Microsoft.AzureActiveDirectory* tag listed under services supporting service endpoints is used only for supporting service endpoints to ADLS Gen 1. Azure AD doesn't support service endpoints natively. For more information about Azure Data Lake Store Gen 1 VNet integration, see [Network security in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
## Secure Azure services to virtual networks
Service endpoints can be configured on virtual networks independently by a user
For more information about built-in roles, see [Azure built-in roles](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json). For more information about assigning specific permissions to custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-Virtual networks and Azure service resources can be in the same or different subscriptions. Certain Azure Services (not all) such as Azure Storage and Azure Key Vault also support service endpoints across different Active Directory(AD) tenants i.e., the virtual network and Azure service resource can be in different Active Directory (AD) tenants. Please check individual service documentation for more details.
+Virtual networks and Azure service resources can be in the same or different subscriptions. Certain Azure services (not all), such as Azure Storage and Azure Key Vault, also support service endpoints across different Active Directory (AD) tenants. This means the virtual network and Azure service resource can be in different Active Directory (AD) tenants. Check individual service documentation for more details.
## Pricing and limits
-There's no additional charge for using service endpoints. The current pricing model for Azure services (Azure Storage, Azure SQL Database, etc.) applies as-is today.
+There's no extra charge for using service endpoints. The current pricing model for Azure services (Azure Storage, Azure SQL Database, etc.) applies as-is today.
There's no limit on the total number of service endpoints in a virtual network.
virtual-wan Virtual Wan Point To Site Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-azure-ad.md
Previously updated : 10/11/2022 Last updated : 10/19/2022
A User VPN configuration defines the parameters for connecting remote clients. I
1. Navigate to your **Virtual WAN ->User VPN configurations** page and click **+Create user VPN config**.
- :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/user-vpn.png" alt-text="Screenshot of the Create User V P N configuration.":::
+ :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/user-vpn.png" alt-text="Screenshot of the Create User V P N configuration." lightbox="./media/virtual-wan-point-to-site-azure-ad/user-vpn.png":::
1. On the **Basics** page, specify the parameters.
- :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/basics.png" alt-text="Screenshot of the Basics page.":::
+ :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/basics.png" alt-text="Screenshot of the Basics page." lightbox="./media/virtual-wan-point-to-site-azure-ad/basics.png":::
   * **Configuration name** - Enter the name you want to call your User VPN Configuration.
   * **Tunnel type** - Select OpenVPN from the dropdown menu.

1. Click **Azure Active Directory** to open the page.
- :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/values.png" alt-text="Screenshot of the Azure Active Directory page.":::
+ :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/values.png" alt-text="Screenshot of the Azure Active Directory page." lightbox="./media/virtual-wan-point-to-site-azure-ad/values.png":::
   Toggle **Azure Active Directory** to **Yes** and supply the following values based on your tenant details. You can view the necessary values on the Azure Active Directory page for Enterprise applications in the portal.

   * **Authentication method** - Select Azure Active Directory.
A User VPN configuration defines the parameters for connecting remote clients. I
## <a name="site"></a>Create an empty hub
-For this exercise, we create an empty virtual hub. In the next section, you add a gateway to an already existing hub. However, it's also possible to combine these steps and create the hub with the P2S gateway settings all at once. After configuring the settings, click **Review + create** to validate, then **Create**.
+For this exercise, you create an empty virtual hub in this step and, in the next section, you add a P2S gateway to this hub. However, you can combine these steps and create the hub with the P2S gateway settings all at once. The result is the same either way. After configuring the settings, click **Review + create** to validate, then **Create**.
[!INCLUDE [Create an empty hub](../../includes/virtual-wan-hub-basics.md)]

## <a name="hub"></a>Add a P2S gateway to a hub
-This section shows you how to add a gateway to an already existing virtual hub. This step can take up to 30 minutes for the hub to complete updating.
+This section shows you how to add a gateway to an already existing virtual hub. This step can take up to 30 minutes for the hub to complete updating.
1. Navigate to the **Hubs** page under the virtual WAN.
-1. Select the hub to which you want to associate the VPN server configuration and click the ellipsis (**...**) to show the menu. Then, click **Edit virtual hub**.
-
- :::image type="content" source="media/virtual-wan-point-to-site-azure-ad/select-hub.png" alt-text="Screenshot shows Edit virtual hub selected from the menu." lightbox="media/virtual-wan-point-to-site-azure-ad/select-hub.png":::
-
+1. Click the name of the hub that you want to edit to open the page for the hub.
+1. Click **Edit virtual hub** at the top of the page to open the **Edit virtual hub** page.
1. On the **Edit virtual hub** page, check the checkboxes for **Include vpn gateway for vpn sites** and **Include point-to-site gateway** to reveal the settings. Then configure the values.
- :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/edit-virtual-hub.png" alt-text="Screenshot shows the Edit virtual hub page.":::
+ :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/hub.png" alt-text="Screenshot shows the Edit virtual hub." lightbox="./media/virtual-wan-point-to-site-azure-ad/hub.png":::
   * **Gateway scale units**: Select the Gateway scale units. Scale units represent the aggregate capacity of the User VPN gateway. If you select 40 or more gateway scale units, plan your client address pool accordingly. For information about how this setting impacts the client address pool, see [About client address pools](about-client-address-pools.md). For information about gateway scale units, see the [FAQ](virtual-wan-faq.md#for-user-vpn-point-to-site--how-many-clients-are-supported).
   * **User VPN configuration**: Select the configuration that you created earlier.
- * **Client address pool**: Specify the client address pool from which the VPN clients will be assigned IP addresses. This setting corresponds to the gateway scale units that you set.
-1. Click **Confirm**. It can take up to 30 minutes to update the hub.
+ * **User Groups to Address Pools Mapping**: For information about this setting, see [Configure user groups and IP address pools for P2S User VPNs (preview)](user-groups-create.md).
+
+1. After configuring the settings, click **Confirm** to update the hub. It can take up to 30 minutes to update a hub.
## <a name="connect-vnet"></a>Connect VNet to hub
In this section, you create a connection between your virtual hub and your VNet.
## <a name="download-profile"></a>Download User VPN profile
-All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. You can download global (WAN-level) profiles, or a profile for a specific hub. For information and additional instructions, see [Download global and hub profiles](global-hub-profile.md). The following steps walk you through downloading a global WAN-level profile.
+All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. You can download global (WAN-level) profiles, or a profile for a specific hub. For information and additional instructions, see [Download global and hub profiles](global-hub-profile.md). The following steps walk you through downloading a global WAN-level profile.
[!INCLUDE [Download profile](../../includes/virtual-wan-p2s-download-profile-include.md)]
When you no longer need the resources that you created, delete them. Some of the
## Next steps
-To learn more about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md) page.
+For Virtual WAN frequently asked questions, see the [Virtual WAN FAQ](virtual-wan-faq.md).
web-application-firewall Waf Front Door Exclusion Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion-configure.md
+
+ Title: Configure WAF exclusion lists for Front Door
+description: Learn how to configure a WAF exclusion list for an existing Front Door endpoint.
+ Last updated : 10/18/2022
+zone_pivot_groups: web-application-firewall-configuration
++
+# Configure Web Application Firewall exclusion lists
+
+Sometimes the Front Door Web Application Firewall (WAF) might block a legitimate request. As part of tuning your WAF, you can configure the WAF to allow the request for your application. WAF exclusion lists allow you to omit specific request attributes from a WAF evaluation. The rest of the request is evaluated as normal. For more information about exclusion lists, see [Web Application Firewall (WAF) with Front Door exclusion lists](waf-front-door-exclusion.md).
+
+An exclusion list can be configured by using [Azure PowerShell](/powershell/module/az.frontdoor/New-AzFrontDoorWafManagedRuleExclusionObject), the [Azure CLI](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion#az-network-front-door-waf-policy-managed-rules-exclusion-add), the [REST API](/rest/api/frontdoorservice/webapplicationfirewall/policies/createorupdate), Bicep, ARM templates, and the Azure portal.
+
+## Scenario
+
+Suppose you've created an API. Your clients send requests to your API that include headers with names like `userid` and `user-id`.
+
+While tuning your WAF, you've noticed that some legitimate requests have been blocked because the user headers included character sequences that the WAF detected as SQL injection attacks. Specifically, rule ID 942230 detects the request headers and blocks the requests. [Rule 942230 is part of the SQLI rule group.](waf-front-door-drs.md#drs942-20)
+
+You decide to create an exclusion to allow these legitimate requests to pass through without the WAF blocking them.
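+
+For illustration only (the endpoint and header value below are hypothetical), a legitimate request like the following could trip rule 942230 before the exclusion exists, because the header value resembles a conditional SQL pattern:
+
+```azurepowershell
+# Hypothetical request; the user-id header value looks like conditional SQL to the WAF.
+Invoke-WebRequest -Uri 'https://contoso.azurefd.net/api/orders' `
+    -Headers @{ 'user-id' = "1 or id=2" }
+```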
++
+## Create an exclusion
+
+1. Open your Front Door WAF policy.
+
+1. Select **Managed rules**, and then select **Manage exclusions** on the toolbar.
+
+ :::image type="content" source="../media/waf-front-door-exclusion-configure/managed-rules-exclusion.png" alt-text="Screenshot of the Azure portal showing the WAF policy's managed rules page, with the 'Manage exclusions' button highlighted." :::
+
+1. Select the **Add** button.
+
+ :::image type="content" source="../media/waf-front-door-exclusion-configure/exclusion-add.png" alt-text="Screenshot of the Azure portal showing the exclusion list, with the Add button highlighted." :::
+
+1. Configure the exclusion's **Applies to** section as follows:
+
+ | Field | Value |
+ |-|-|
+ | Rule set | Microsoft_DefaultRuleSet_2.0 |
+ | Rule group | SQLI |
+ | Rule | 942230 Detects conditional SQL injection attempts |
+
+1. Configure the exclusion match conditions as follows:
+
+ | Field | Value |
+ |-|-|
+ | Match variable | Request header name |
+ | Operator | Starts with |
+ | Selector | user |
+
+1. Review the exclusion, which should look like the following screenshot:
+
+ :::image type="content" source="../media/waf-front-door-exclusion-configure/exclusion-details.png" alt-text="Screenshot of the Azure portal showing the exclusion configuration." :::
+
+ This exclusion applies to any request headers that start with the word `user`. The match condition is case insensitive, so headers that start with `User` are also covered by the exclusion. If WAF rule 942230 detects a risk in these header values, it ignores the header and moves on.
+
+1. Select **Save**.
+++
+## Define an exclusion selector
+
+Use the [New-AzFrontDoorWafManagedRuleExclusionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleexclusionobject) cmdlet to define a new exclusion selector.
+
+The following example identifies request headers that start with the word `user`. The match condition is case insensitive, so headers that start with `User` are also covered by the exclusion.
+
+```azurepowershell
+$exclusionSelector = New-AzFrontDoorWafManagedRuleExclusionObject `
+ -Variable RequestHeaderNames `
+ -Operator StartsWith `
+ -Selector 'user'
+```
+
+## Define a per-rule exclusion
+
+Use the [New-AzFrontDoorWafManagedRuleOverrideObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleoverrideobject) cmdlet to define a new per-rule exclusion, which includes the selector you created in the previous step.
+
+The following example creates an exclusion for rule ID 942230.
+
+```azurepowershell
+$exclusion = New-AzFrontDoorWafManagedRuleOverrideObject `
+ -RuleId '942230' `
+ -Exclusion $exclusionSelector
+```
+
+## Apply the exclusion to the rule group
+
+Use the [New-AzFrontDoorWafRuleGroupOverrideObject](/powershell/module/az.frontdoor/new-azfrontdoorwafrulegroupoverrideobject) cmdlet to create a rule group override, which applies the exclusion to the appropriate rule group.
+
+The example below uses the SQLI rule group, because that group contains rule ID 942230.
+
+```azurepowershell
+$ruleGroupOverride = New-AzFrontDoorWafRuleGroupOverrideObject `
+ -RuleGroupName 'SQLI' `
+ -ManagedRuleOverride $exclusion
+```
+
+## Configure the managed rule set
+
+Use the [New-AzFrontDoorWafManagedRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleobject) cmdlet to configure the managed rule set, including the rule group override that you created in the previous step.
+
+The example below configures the DRS 2.0 rule set with the rule group override and its exclusion.
+
+```azurepowershell
+$managedRuleSet = New-AzFrontDoorWafManagedRuleObject `
+ -Type 'Microsoft_DefaultRuleSet' `
+ -Version '2.0' `
+ -Action Block `
+ -RuleGroupOverride $ruleGroupOverride
+```
+
+## Apply the managed rule set configuration to the WAF profile
+
+Use the [Update-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/update-azfrontdoorwafpolicy) cmdlet to update your WAF policy to include the configuration you created above. Ensure that you use the correct resource group name and WAF policy name for your own environment.
+
+```azurepowershell
+Update-AzFrontDoorWafPolicy `
+ -ResourceGroupName 'FrontDoorWafPolicy' `
+ -Name 'WafPolicy' `
+ -ManagedRule $managedRuleSet
+```
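+
+To confirm the update, you can retrieve the policy and inspect its managed rule configuration. The following is a minimal sketch, assuming the same resource group and policy names as above:
+
+```azurepowershell
+# Retrieve the updated WAF policy and review its managed rule sets,
+# including the rule group override and exclusion configured above.
+$policy = Get-AzFrontDoorWafPolicy `
+ -ResourceGroupName 'FrontDoorWafPolicy' `
+ -Name 'WafPolicy'
+$policy.ManagedRules
+```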
+++
+## Create an exclusion
+
+Use the [`az network front-door waf-policy managed-rules exclusion add`](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion) command to update your WAF policy to add a new exclusion.
+
+The exclusion identifies request headers that start with the word `user`. The match condition is case insensitive, so headers that start with `User` are also covered by the exclusion.
+
+Ensure that you use the correct resource group name and WAF policy name for your own environment.
+
+```azurecli
+az network front-door waf-policy managed-rules exclusion add \
+ --resource-group FrontDoorWafPolicy \
+ --policy-name WafPolicy \
+ --type Microsoft_DefaultRuleSet \
+ --rule-group-id SQLI \
+ --rule-id 942230 \
+ --match-variable RequestHeaderNames \
+ --operator StartsWith \
+ --value user
+```
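+
+To verify that the exclusion was added, you can list the exclusions for the rule. The following is a sketch, assuming the same resource group and policy names as above:
+
+```azurecli
+az network front-door waf-policy managed-rules exclusion list \
+ --resource-group FrontDoorWafPolicy \
+ --policy-name WafPolicy \
+ --type Microsoft_DefaultRuleSet \
+ --rule-group-id SQLI \
+ --rule-id 942230
+```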
+++
+## Example Bicep file
+
+The following example Bicep file shows how to:
+
+- Create a Front Door WAF policy.
+- Enable the DRS 2.0 rule set.
+- Configure an exclusion for rule 942230, which exists within the SQLI rule group. This exclusion applies to any request headers that start with the word `user`. The match condition is case insensitive, so headers that start with `User` are also covered by the exclusion. If WAF rule 942230 detects a risk in these header values, it ignores the header and moves on.
+
+```bicep
+param wafPolicyName string = 'WafPolicy'
+
+@description('The mode that the WAF should be deployed using. In "Prevention" mode, the WAF will block requests it detects as malicious. In "Detection" mode, the WAF will not block requests and will simply log the request.')
+@allowed([
+ 'Detection'
+ 'Prevention'
+])
+param wafMode string = 'Prevention'
+
+resource wafPolicy 'Microsoft.Network/frontDoorWebApplicationFirewallPolicies@2022-05-01' = {
+ name: wafPolicyName
+ location: 'Global'
+ sku: {
+ name: 'Premium_AzureFrontDoor'
+ }
+ properties: {
+ policySettings: {
+ enabledState: 'Enabled'
+ mode: wafMode
+ }
+ managedRules: {
+ managedRuleSets: [
+ {
+ ruleSetType: 'Microsoft_DefaultRuleSet'
+ ruleSetVersion: '2.0'
+ ruleSetAction: 'Block'
+ ruleGroupOverrides: [
+ {
+ ruleGroupName: 'SQLI'
+ rules: [
+ {
+ ruleId: '942230'
+ enabledState: 'Enabled'
+ action: 'AnomalyScoring'
+ exclusions: [
+ {
+ matchVariable: 'RequestHeaderNames'
+ selectorMatchOperator: 'StartsWith'
+ selector: 'user'
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+```
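+
+To deploy the Bicep file, you can use the Azure CLI. The following is a sketch; the resource group and file name are placeholders for your own values:
+
+```azurecli
+# Deploys the WAF policy defined in the Bicep file to an existing resource group.
+az deployment group create \
+ --resource-group FrontDoorWafPolicy \
+ --template-file waf-policy.bicep
+```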
++
+## Next steps
+
+- Learn more about [Front Door](../../frontdoor/front-door-overview.md).
web-application-firewall Waf Front Door Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion.md
Title: Web application firewall exclusion lists in Azure Front Door - Azure portal
-description: This article provides information on exclusion lists configuration in Azure Front with the Azure portal.
+ Title: Web application firewall exclusion lists in Azure Front Door
+description: This article provides information on exclusion lists configuration in Azure Front Door.
Previously updated : 08/03/2022 Last updated : 10/18/2022
# Web Application Firewall (WAF) with Front Door exclusion lists
-Sometimes Web Application Firewall (WAF) might block a request that you want to allow for your application. WAF exclusion lists allow you to omit certain request attributes from a WAF evaluation. The rest of the request is evaluated as normal.
+Sometimes the Front Door Web Application Firewall (WAF) might block a legitimate request. As part of tuning your WAF, you can configure the WAF to allow the request for your application. WAF exclusion lists allow you to omit specific request attributes from a WAF evaluation. The rest of the request is evaluated as normal.
-For example, Active Directory inserts tokens that are used for authentication. When used in a request header, these tokens can contain special characters that may trigger a false positive from the WAF rules. By adding the header to an exclusion list, you can configure WAF to ignore the header, but WAF still evaluates the rest of the request.
+For example, Azure Active Directory provides tokens that are used for authentication. When used in a request header, these tokens can contain special characters that might trigger a false positive detection by one or more WAF rules. You can add the header to an exclusion list, which tells the WAF to ignore the header. The WAF still inspects the rest of the request for suspicious content.
-An exclusion list can be configured using [PowerShell](/powershell/module/az.frontdoor/New-AzFrontDoorWafManagedRuleExclusionObject), [Azure CLI](/cli/azure/network/front-door/waf-policy/managed-rules/exclusion#az-network-front-door-waf-policy-managed-rules-exclusion-add), [REST API](/rest/api/frontdoorservice/webapplicationfirewall/policies/createorupdate), or the Azure portal. The following example shows the Azure portal configuration.
+## Exclusion scopes
-## Configure exclusion lists using the Azure portal
+You can create exclusions at the following scopes (a PowerShell sketch showing the three scopes follows this list):
-**Manage exclusions** is accessible from WAF portal under **Managed rules**
+- **Rule set** exclusions apply to all rules within a rule set.
+- **Rule group** exclusions apply to all of the rules of a particular category within a rule set. For example, you can configure an exclusion that applies to all of the SQL injection rules.
+- **Rule** exclusions apply to a single rule.
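+
+The Az.FrontDoor PowerShell module expresses these scopes through where you attach the exclusion. The following is a minimal sketch, assuming DRS 2.0 and the `-Exclusion` parameter at each level; adapt the selector to your own scenario:
+
+```azurepowershell
+# A selector for request headers that start with "user".
+$selector = New-AzFrontDoorWafManagedRuleExclusionObject `
+ -Variable RequestHeaderNames `
+ -Operator StartsWith `
+ -Selector 'user'
+
+# Rule scope: the exclusion applies only to rule 942230.
+$ruleOverride = New-AzFrontDoorWafManagedRuleOverrideObject `
+ -RuleId '942230' `
+ -Exclusion $selector
+
+# Rule group scope: the exclusion applies to every rule in the SQLI group.
+$groupOverride = New-AzFrontDoorWafRuleGroupOverrideObject `
+ -RuleGroupName 'SQLI' `
+ -Exclusion $selector
+
+# Rule set scope: the exclusion applies to every rule in DRS 2.0.
+$ruleSet = New-AzFrontDoorWafManagedRuleObject `
+ -Type 'Microsoft_DefaultRuleSet' `
+ -Version '2.0' `
+ -Action Block `
+ -Exclusion $selector
+```
+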
-![Manage exclusion](../media/waf-front-door-exclusion/exclusion1.png)
-![Manage exclusion_add](../media/waf-front-door-exclusion/exclusion2.png)
+## Exclusion selectors
- An example exclusion list:
-![Manage exclusion_define](../media/waf-front-door-exclusion/exclusion3.png)
+Exclusion selectors identify the parts of requests that the exclusion applies to. The WAF ignores any detections that it finds in the specified parts of the request. You can specify multiple exclusion selectors in a single exclusion.
-This example excludes the value in the *user* header field. A valid request may include the *user* field that contains a string that triggers a SQL injection rule. You can exclude the *user* parameter in this case so that the WAF rule doesn't evaluate anything in the field.
+Each exclusion selector specifies a match variable, an operator, and a selector.
-The following attributes can be added to exclusion lists by name. The values of the fields you use aren't evaluated against WAF rules, but their names are evaluated. The exclusion lists remove inspection of the field's value.
+### Match variables
+
+The following request attributes can be added to an exclusion:
* Request header name
* Request cookie name
* Query string args name
-* Request body post args name
-* RequestBodyJSONArgNames
+* Request body POST args name
+* Request body JSON args name *(supported on DRS 2.0 or greater)*
+
+The exclusion applies to the values of the fields you specify: the WAF no longer evaluates those values against its rules. However, the field names themselves are still evaluated. For more information, see [Exclude other request attributes](#exclude-other-request-attributes).
+
+### Operators
+
+You can specify an exact request header, body, cookie, or query string attribute to match, or you can specify partial matches. The following operators are supported for match criteria:
+
+- **Equals**: Match all request fields that exactly match the specified selector value. For example, to select a header named **bearerToken**, use the *Equals* operator with the selector set to **bearerToken**.
+- **Starts with**: Match all request fields that start with the specified selector value.
+- **Ends with**: Match all request fields that end with the specified selector value.
+- **Contains**: Match all request fields that contain the specified selector value.
+- **Equals any**: Match all request fields. When you use the *Equals any* operator, the selector value is automatically set to _*_. For example, you can use the *Equals any* operator to configure an exclusion that applies to all request headers, as in the sketch after this list.
+
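+For example, the following is a minimal Azure PowerShell sketch of an *Equals any* selector for request headers, assuming the Az.FrontDoor module:
+
+```azurepowershell
+# A selector that matches every request header. With the EqualsAny
+# operator, the WAF treats the selector value as *.
+$allHeadersSelector = New-AzFrontDoorWafManagedRuleExclusionObject `
+ -Variable RequestHeaderNames `
+ -Operator EqualsAny `
+ -Selector '*'
+```
+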
+### Case sensitivity
+
+Header and cookie names are case insensitive. Query strings, POST arguments, and JSON arguments are case sensitive.
+
+### Body contents inspection
+
+Some of the managed rules evaluate the raw payload of the request body, before it's parsed into POST arguments or JSON arguments. So, in some situations you might see log entries with a matchVariableName of `InitialBodyContents`.
+
+For example, suppose you create an exclusion with a match variable of *Request body POST args* and a selector to identify and ignore POST arguments named *FOO*. You'll no longer see any log entries with a matchVariableName of `PostParamValue:FOO`. However, if a POST argument named *FOO* contains text that triggers a rule, the log might show the detection in the initial body contents.
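+
+For reference, a PowerShell sketch of that exclusion might look like the following; *FOO* is a hypothetical POST argument name:
+
+```azurepowershell
+# Excludes POST arguments named FOO from rule evaluation.
+$postArgSelector = New-AzFrontDoorWafManagedRuleExclusionObject `
+ -Variable RequestBodyPostArgNames `
+ -Operator Equals `
+ -Selector 'FOO'
+```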
->[!NOTE]
->RequestBodyJSONArgNames is only available on Default Rule Set (DRS) 2.0 or later.
+## <a name="define-exclusion-based-on-web-application-firewall-logs"></a> Define exclusion rules based on Web Application Firewall logs
-You can specify an exact request header, body, cookie, or query string attribute match. Or, you can optionally specify partial matches. The following operators are the supported match criteria:
+[Azure Web Application Firewall monitoring and logging](waf-front-door-monitor.md) describes how you can use logs to view the details of a blocked request, including the parts of the request that triggered the rule.
-- **Equals**: This operator is used for an exact match. For example, to select a header named **bearerToken**, use the equals operator with the selector set as **bearerToken**.
-- **Starts with**: This operator matches all fields that start with the specified selector value.
-- **Ends with**: This operator matches all request fields that end with the specified selector value.
-- **Contains**: This operator matches all request fields that contain the specified selector value.
-- **Equals any**: This operator matches all request fields. * is the selector value.
+Sometimes a specific WAF rule produces false positive detections from the values included in a request header, cookie, POST argument, query string argument, or JSON field in a request body. If these false positive detections happen, you can configure the rule to exclude the relevant part of the request from its evaluation.
-Header and cookie names are case insensitive.
+The following table shows example values from WAF logs and the corresponding exclusion selectors that you could create. A PowerShell sketch of one of these selectors follows the table.
-If a header value, cookie value, post argument value, or query argument value produces false positives for some rules, you can exclude that part of the request from consideration by the rule:
+| matchVariableName from WAF logs | Rule exclusion in Portal |
+|-|-|
+| CookieValue:SOME_NAME | Request cookie name Equals SOME_NAME |
+| HeaderValue:SOME_NAME | Request header name Equals SOME_NAME |
+| PostParamValue:SOME_NAME | Request body POST args name Equals SOME_NAME |
+| QueryParamValue:SOME_NAME | Query string args name Equals SOME_NAME |
+| SOME_NAME | Request body JSON args name Equals SOME_NAME |
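+
+For example, if the log shows a matchVariableName of `CookieValue:SESSIONID` (where *SESSIONID* is a hypothetical cookie name), the corresponding selector might be sketched in Azure PowerShell as:
+
+```azurepowershell
+# Excludes the value of the SESSIONID cookie from rule evaluation.
+$cookieSelector = New-AzFrontDoorWafManagedRuleExclusionObject `
+ -Variable RequestCookieNames `
+ -Operator Equals `
+ -Selector 'SESSIONID'
+```
+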
+### Exclusions for JSON request bodies
-|matchVariableName from WAF logs |Rule exclusion in Portal |
-|||
-|CookieValue:SOME_NAME |Request cookie name Equals SOME_NAME|
-|HeaderValue:SOME_NAME |Request header name Equals SOME_NAME|
-|PostParamValue:SOME_NAME |Request body post args name Equals SOME_NAME|
-|QueryParamValue:SOME_NAME |Query string args name Equals SOME_NAME|
+From DRS version 2.0, JSON request bodies are inspected by the WAF. For example, consider this JSON request body:
+```json
+{
+ "posts": [
+ {
+ "id": 1,
+ "comment": ""
+ },
+ {
+ "id": 2,
+ "comment": "\"1=1\""
+ }
+ ]
+}
+```
-We currently only support rule exclusions for the above matchVariableNames in their WAF logs. For any other matchVariableNames, you must either disable rules that give false positives, or create a custom rule that explicitly allows those requests. In particular, when the matchVariableName is CookieName, HeaderName, PostParamName, or QueryParamName, it means the name itself is triggering the rule. Rule exclusion has no support for these matchVariableNames at this time.
+The second comment value contains the character sequence `"1=1"`, which the WAF detects as a potential SQL injection attack.
+If you determine that the request is legitimate, you could create an exclusion with a match variable of *Request body JSON args name*, an operator of *Equals*, and a selector of *posts.comment*.
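+
+A PowerShell sketch of that exclusion, assuming the Az.FrontDoor module and a policy that uses DRS 2.0 or greater:
+
+```azurepowershell
+# Tells the WAF to ignore the posts.comment JSON field during rule evaluation.
+$jsonSelector = New-AzFrontDoorWafManagedRuleExclusionObject `
+ -Variable RequestBodyJsonArgNames `
+ -Operator Equals `
+ -Selector 'posts.comment'
+```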
-If you exclude a Request body post args named *FOO*, no rule should show PostParamValue:FOO as the matchVariableName in your WAF logs. However, you may still see a rule with matchVariableName InitialBodyContents which matches on the value of the post param FOO since post param values are part of the InitialBodyContents.
+## Exclude other request attributes
-You can apply exclusion lists to all rules within the managed rule set, to rules for a specific rule group, or to a single rule as shown in the previous example.
+If your WAF log entry shows a matchVariableName that isn't in the table above, you can't create an exclusion. For example, you can't currently create exclusions for cookie names, header names, POST parameter names, or query parameter names.
-## Define exclusion based on Web Application Firewall Logs
- [Azure Web Application Firewall monitoring and logging](waf-front-door-monitor.md) shows matched details of a blocked request. If a header value, cookie value, post argument value, or query argument value produces false positives for some rules, you can exclude that part of the request from being considered by the rule. The following table shows example values from WAF logs and the corresponding exclusion conditions.
+Instead, consider taking one of the following actions:
-|matchVariableName from WAF logs |Rule exclusion in Portal|
-|--||
-|CookieValue:SOME_NAME |Request cookie name Equals SOME_NAME|
-|HeaderValue:SOME_NAME |Request header name Equals SOME_NAME|
-|PostParamValue:SOME_NAME| Request body post args name Equals SOME_NAME|
-|QueryParamValue:SOME_NAME| Query string args name Equals SOME_NAME|
+- Disable the rules that give false positives.
+- Create a custom rule that explicitly allows those requests. Matching requests bypass all WAF inspection. A sketch of this approach follows below.
+In particular, when the matchVariableName is `CookieName`, `HeaderName`, `PostParamName`, or `QueryParamName`, it means the name of the field, rather than its value, has triggered the rule. Rule exclusion has no support for these matchVariableNames at this time.
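+
+The following is a hedged sketch of the custom-rule approach, assuming the Az.FrontDoor module; the rule name, priority, and URI path are hypothetical placeholders for your own values:
+
+```azurepowershell
+# Matches requests whose URI contains a known-good path.
+$matchCondition = New-AzFrontDoorWafMatchConditionObject `
+ -MatchVariable RequestUri `
+ -OperatorProperty Contains `
+ -MatchValue '/api/known-good-path'
+
+# A custom rule that allows matching requests, skipping managed rule inspection.
+$allowRule = New-AzFrontDoorWafCustomRuleObject `
+ -Name 'AllowKnownGoodPath' `
+ -RuleType MatchRule `
+ -MatchCondition $matchCondition `
+ -Action Allow `
+ -Priority 10
+```
+
+You would then attach the custom rule to your WAF policy, for example by using the [Update-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/update-azfrontdoorwafpolicy) cmdlet.
+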
## Next steps
-After you configure your WAF settings, learn how to view your WAF logs. For more information, see [Front Door diagnostics](../afds/waf-front-door-monitor.md).
+- [Configure exclusion lists on your Front Door WAF](waf-front-door-exclusion-configure.md)
+- After you configure your WAF settings, learn how to view your WAF logs. For more information, see [Front Door diagnostics](../afds/waf-front-door-monitor.md).